Wednesday, 16 November 2011

Do not rewrite

Just don’t do it

Instead evolve

Most (99.99%) developers continually insist that whichever application they work on needs a rewrite. As in: scrap everything, redesign the architecture and technology choices, and rewrite every component, because the current state is not ideal. I used to feel and say this as well.

Any code I have written (and even worse, code written by others) that I need to update or maintain will look bad after only 6 months. After 2 years it smells of bad architecture, design and logic. After 5 years the code base feels like a mess, totally unmaintainable.

This is partly because technology has moved on, and my skills, or rather preferences, have evolved. But it is mostly because over time the application has been updated with bug fixes and new features, and its maintenance may have been handed over several times to people with different skill levels or understanding of the architecture. This bloats the code structure beyond what the original, probably quite tidy, design was intended for.

Further maintenance becomes more and more costly, as any change takes longer and/or more people. And innovation is muffled as inspiration is dampened.

Many companies make a good living when other companies outsource the costly maintenance of these systems.


I used to feel the need to rewrite in every new assignment, but not anymore. My advice now is: please do not rewrite.


Why are rewrites a bad idea?


No value

If you undertake a rewrite of your core application it will be very costly for your business, with no initial return on investment. Expect 6-12 months of pure losses, at the end of which the customer (internal or external) has not received any extra value. Most likely they will have fewer features, as scope was reduced, and rarely any new ones. Future performance gains and maintainability are a plus, but of no value there and then. So it is a huge hit on fragile budgets, or even the liquidity of smaller companies, with little to no benefit.

This is the main reason not to rewrite: the business side gains nothing from it; it is a pure expense. No reasonable business person would approve it. And any future relationship will quite likely be very damaged.


Never finishing

Another risk is that the rewrite takes too long and either never really finishes or is cancelled, making it an expensive exercise with no value now nor in the future.

Rewrites that span a long time also tend to suffer from decisions made at the beginning starting to look outdated and wrong before the system is even in production, and before any future value is gained.


Evolve


However, I do not mean you should not evolve your application. Do not keep the status quo either. If it is an important core application, you should evolve it, but not perform a total rewrite.

Instead do it in smaller steps while adding value and features for the customer. This way your product stays cleaner and up to date, but the business side does not take a huge hit. It is also much more predictable and less volatile, which makes it less likely that the project will be cancelled, or worse for smaller companies, that they go bankrupt.


How do you go about evolving and not rewriting? This very much depends on your type of application.

(If it is a tiny application, go ahead and rewrite it. Just make sure the dependent systems are aware of this.)


Refactor

On a minor level, keep up a continual process of refactoring, with larger refactorings a frequent and acceptable task. This should slow down code rot, postponing the need for larger changes for a while.


Modularisation

Eventually your application will need a drastic change. If by chance your original design was quite modular, or loosely coupled with other systems, you have made evolving smoother. If not, try to modularise your current design.

By being modular you can still perform rewrites, but on a much smaller scale, taking one module at a time. Then it is not a 12-month loss but perhaps a 1-month loss, while normal work on other modules continues. This is much less likely to kill your project/company.


Parallel systems

Instead of a hard one-day switch-over from the old system to the new, run the two systems in parallel for a while: the old legacy system and the new clean system.


Skinny interface adapter layer

Modularised or not, a cleaner way to evolve and replace sections of your application is to introduce an adapter layer for your interface, and to keep it skinny.

This interface adapter only relays between your system/application and other systems. When other systems only talk via the adapter, you are free to change your system without affecting external systems.

More importantly, you can now split your system into two systems and relay certain calls to the new system instead of the old, without external systems being aware of or affected by this.

The adapter layer is not always possible, and people may be resistant to introducing yet another layer/system, but it makes you much more adaptable in changing your own architecture: you can move elements to a new system and roll back without external costs.
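To make this concrete, here is a minimal sketch of what such a skinny adapter could look like. All names are illustrative, not from any real project; the assumption is that both the legacy and the new system implement the same interface, and the adapter only relays:

// Hypothetical interface that both the legacy and the new system implement.
interface OrderSystem {
  String placeOrder(String orderId);
}

// The skinny adapter: a pure relay, it routes each call and nothing more.
public class OrderSystemAdapter implements OrderSystem {

  private final OrderSystem legacy;
  private final OrderSystem replacement;

  public OrderSystemAdapter(OrderSystem legacy, OrderSystem replacement) {
    this.legacy = legacy;
    this.replacement = replacement;
  }

  @Override
  public String placeOrder(String orderId) {
    // Relay migrated calls to the new system, the rest to the legacy one.
    if (isMigrated(orderId)) {
      return replacement.placeOrder(orderId);
    }
    return legacy.placeOrder(orderId);
  }

  private boolean isMigrated(String orderId) {
    // Placeholder routing rule; a real one might be per module or feature.
    return orderId.startsWith("NEW-");
  }
}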


No logic in adapter layer

Do not ever put any logic in the interface adapter. Otherwise you end up with yet another system to maintain. Unfortunately this happens quite often, when bad planning and management lead to shortcuts that add logic in the adapter layer. Then you have just multiplied your maintenance costs. And believe me, that logic is rarely temporary.


ESB

Do not interpret this adapter layer requirement as a reason for introducing an ESB. ESBs tend to just camouflage spaghetti integration code, and often introduce the above-mentioned unwanted logic. However, if you already have an ESB in place, it can fulfil the adapter layer requirement.


Legacy system becomes adapter layer

Another way of implementing the adapter layer is to use the existing system to relay calls to the new system instead of handling them itself. Internally the existing legacy system has to do this anyway for each element migrated to the new system, so you can also let external systems go through it.

It is however cleaner to have a separate adapter layer. It will be easier to maintain, as it is probably designed on newer technology and platforms, and it allows you to switch off the legacy system when you can. It also makes it less likely that people just decide to keep using the legacy system.


Topic / feature

An even better rewrite/evolution strategy than per module is per topic/feature. This is an even finer-grained change, and less risky. As part of more agile strategies you can move one feature at a time to the new clean system.

With an adapter layer this switching is smooth, but it is not restricted to an adapter layer. Without the adapter you just have more administration, changing every dependent system for each feature moved.

Feature toggles might be part of this strategy.
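
A minimal sketch of what a feature-toggle registry could look like; the names are illustrative and not tied to any particular library:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Each feature flips from the legacy system to the new one independently.
public class FeatureToggles {

  private final Set<String> migratedFeatures =
      ConcurrentHashMap.newKeySet();

  public void migrate(String feature) {
    migratedFeatures.add(feature);
  }

  public boolean isMigrated(String feature) {
    return migratedFeatures.contains(feature);
  }
}

The adapter, or the legacy system itself, can then consult the toggle per call and route only the migrated features to the new system.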



New feature -> new system
Update feature -> new system


Every new feature request is naturally implemented on the new system. But a good method for choosing which existing features to move is picking a feature that there is a requirement to modify. Then do not touch it on the old system, but instead rewrite and update it on the new system. This is a good carrot/stick way to ensure the migration is performed, and it ensures that the company receives some value for each rewrite.


Do not update the old (legacy) system/application

Another very important rule is not to update the old legacy system at all as each feature/module is migrated off it. It is so tempting to take a shortcut and update both systems, perhaps because you have not introduced an adapter layer or instructed enough external systems to use the new system. This will kill your migration/rewrite.

This is a step the leadership of your team needs to be firm about and insist on. Do not end up maintaining two systems (or three, with the adapter layer).


Ensure legacy system delegates

The legacy system will undoubtedly refer internally to the rewritten and migrated module or feature. You need to ensure the legacy system now delegates internally to the new system. This is the only update you should make on the old system. Otherwise you will again need to maintain two systems, and run the risk of the legacy system and the new system executing slightly differently for the same task, leading to people insisting on using the legacy system.
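
As a sketch, with hypothetical names, the migrated feature inside the legacy system shrinks to a pure delegation stub:

// Hypothetical client for reaching the new system (remote or local).
interface NewSystemClient {
  String createInvoice(long customerId);
}

// Inside the legacy system the migrated feature is reduced to delegation.
class LegacyBillingService {

  private final NewSystemClient newSystem;

  LegacyBillingService(NewSystemClient newSystem) {
    this.newSystem = newSystem;
  }

  String createInvoice(long customerId) {
    // The old implementation is deleted, not kept in parallel, so the two
    // systems cannot drift apart for the same task.
    return newSystem.createInvoice(customerId);
  }
}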


Kill the legacy system

You need to plan for and ensure that the old system is eventually switched off. Otherwise you will still need to keep people and skills around to maintain it. It may be tempting, but erroneous, to leave behind some elements that should be on the new architecture.

There may be some elements, not needed on the new system or unrelated to it, that are left on the old system. But it is probably better to move even these to another new system than to keep the old system lying around, with the cost of maintaining that small element increasing as well.


Kill the adapter layer?

Once you have killed the legacy system, do you want to keep the adapter layer? Keep in mind you might want to move on from the new system when it starts to rot as well. However, it may be tidier and less complicated to kill the adapter layer too. I would kill it, and reintroduce it in the future if needed.


Do not stop half way

If for some reason the migration to a new system is stopped, due to reprioritisation, lack of progress, etc., then you are stuck maintaining not one but three systems. Many companies end up burned by this, and it is mostly down to weak leadership/management.


So my point is: rewrite small topics/features, then modules, but never the whole application. This way value is introduced along the way, without a budgetary black hole.


References

Many, but some are from former colleague Anders Sveen.




Tuesday, 30 August 2011

Null is okay

Most of the bugs that you find, or that are reported to you, in Java applications are NullPointerExceptions [1]. NPEs are rife at the beginning of a product's life cycle, and they never go away.


The consequence is bug fixing that checks if:
  • input is null

  • interface calls return null

  • method calls return null

  • properties and collections are null


So this original pizza ordering method:
public void orderPizza(PizzaOrder pizzaOrder){
  for( Pizza pizza : pizzaOrder.getPizzas()){
    kitchen.makePizza(pizza);
  }
}


Morphs into this:
public void orderPizza(PizzaOrder pizzaOrder){
  if( pizzaOrder != null) {
    if( pizzaOrder.getPizzas() != null &&
        !pizzaOrder.getPizzas().isEmpty()) {
      if( kitchen != null) {
        for( Pizza pizza : pizzaOrder.getPizzas()){
          if( pizza != null) {
            kitchen.makePizza(pizza);
          } else {
            throw new PizzeriaTechnicalException("No pizza!!!");
          }
        }
      } else {
        throw new PizzeriaTechnicalException("No kitchen!!!");
      }
    } else {
      throw new PizzeriaTechnicalException("No pizzas in order!!!");
    }
  } else {
    throw new PizzeriaTechnicalException("No pizza order!!!");
  }
}

(Or alternatively many "assert pizzaOrder != null" statements, which are useless in production.)

Madness, and very messy.

Blinkered coding standard:
This happens when code ownership is not clear, when the desire for clean code is non-existent, and when bugs are fixed with blinkers on.
When developers are time-constrained or scared of changing code.
When development is outsourced but authority and knowledge are not, so refactoring is avoided or not part of the SLA.


Why are we so afraid of NullPointerExceptions?
What is wrong with null?


Null is okay. Null is exactly that: null. Don't avoid it like the plague, but try not to pass the problem on to others. Tidy up your gateway interfaces, and trust that internally null does not happen. If it does, then don't hide the problem.

Do not check for null inside your application. Check for null at your edges:

Validate user interface inputs. "Users" are always causing trouble, but we cannot live without customers. Validate the essentials, but remember duck walking; don't scare your customers with perfectionist validation.

Validate external third-party API interfaces. Probably validate internal API interfaces too. Remember duck walking. If an internal system quite close to your application erroneously passes null values, fix that system instead. It is a grey zone, however, so some null pointer validation is fine. Just don't complicate your system because of the laziness of another within your team/company.

Do not validate calls between your application's layers. Trust that the edges have validated the data. Use unit tests to ensure integrity.

Dealing with NPE/Null?


If you encounter a null, or anything resembling empty when that should not be possible: throw a NullPointerException. Don't sugar-coat it and masquerade it as something else. An NPE is an NPE! It does what it says on the tin! It is easily understood by all developers, and easily traced. Obviously don't show it to the end user, but don't catch it too far down your stack.

If you get a bug report about an NPE: fix the edge validation, the factory generation, or the code inside the API call. Do not add a "== null" condition inside your application.

It is easier to avoid internal NPEs by not passing external DTOs and domain objects through your own application. If you act on it, it should be your application's object, not an external one. This avoids complicated null pointer validation.
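
For example, translate at the edge and normalise nulls once. A sketch with assumed names: ExternalOrderDto is hypothetical, and a PizzaOrder constructor taking the pizzas list is assumed:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Edge factory: converts an external DTO into our own domain object,
// normalising nulls once so the rest of the application can trust the data.
public class PizzaOrderFactory {

  public PizzaOrder fromExternal(ExternalOrderDto dto) {
    if (dto == null) {
      // Don't hide the problem: an NPE says exactly what happened.
      throw new NullPointerException("No order received from external system");
    }
    List<Pizza> pizzas = dto.getPizzas() == null
        ? Collections.<Pizza>emptyList()
        : new ArrayList<Pizza>(dto.getPizzas());
    return new PizzaOrder(pizzas);
  }
}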

The "Clean Code" book suggests a "special case object" for method calls that previously might return null. I am not a big fan, but it may solve internal API NPEs. I definitely support the pattern of returning empty collections from find* calls instead of null when nothing was found.
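
A sketch of that pattern; the repository and the query stand-in are assumed names:

import java.util.Collections;
import java.util.List;

public class PizzaRepository {

  public List<Pizza> findPizzasByTopping(String topping) {
    List<Pizza> found = queryDatabase(topping);
    // Normalise to an empty list so callers can iterate without null checks.
    return found == null ? Collections.<Pizza>emptyList() : found;
  }

  private List<Pizza> queryDatabase(String topping) {
    // Stand-in for a real query; some APIs legitimately return null here,
    // which is exactly what is normalised away above.
    return null;
  }
}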


How would I deal with Null in the Pizza Order scenario?

The PizzaOrder object should be validated beforehand in the interface layer. It is not the responsibility of this method.

The same layer, before this method is called, should use a factory method that sets the pizzas collection to an empty collection if there are no pizzas.

The kitchen is always injected or constructed externally to this method, and is not its responsibility.

Factories mapping/creating the PizzaOrder before this method call should also ensure no Pizza objects are null.

The one possibly valid check is whether there are any pizzas in the order: pizzaOrder.getPizzas().isEmpty(). But it is not a null pointer check, as the previous interfaces/factories will ensure a collection object exists. However, the interface or application before this method call should probably have validated the actual order, to ensure an order contains pizzas!
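
Put together, the method stays close to the original. A sketch, assuming the edges and factories above have done their job:

public void orderPizza(PizzaOrder pizzaOrder) {
  // pizzaOrder, its pizzas collection and kitchen are guaranteed non-null
  // by the interface layer, factories and injection: no defensive checks.
  if (pizzaOrder.getPizzas().isEmpty()) {
    // The one business-level check left, and arguably the caller's job too.
    throw new PizzeriaTechnicalException("No pizzas in order!!!");
  }
  for (Pizza pizza : pizzaOrder.getPizzas()) {
    kitchen.makePizza(pizza);
  }
}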


So Null is okay and NullPointerException can be your friend.

Tuesday, 16 August 2011

Boobs, breasts and tits. Functional or attractive?

A different post than my normal rambling:

Our beautiful baby daughter was recently born, and is in the middle of breastfeeding. All this constant feeding got me thinking about how I was reacting to the exposure.

Basically, at the moment I am only thinking of my partner's boobs as a utility to feed our daughter. They have a purely functional property.

It is much the same when I see other mothers feeding their babies on park benches etc. Mostly I think it is just sweet, or I am indifferent to it.

So this body part, which is often connected with attraction and excitement, has, with the introduction of a baby, changed my perception completely. And I would guess this reaction is similar for most other males, to various degrees (and probably with some exceptions).


But remove the baby and any pregnancy references, and if I catch a glimpse of a breast or cleavage I still snigger like a teenager...


Well at least I got to have "boobs" in a blog title, instead of my normal geeky techy posts :)



Tuesday, 3 May 2011

Ubuntu releases

PS: This is not directly about the latest 11.04 Natty release with the infamous Unity UI. (As of 03.05.2011) I am on 10.04 Lucid on my desktop and 10.10 Maverick on my servers (I know, it should be the other way round...), so I have not even tried Unity yet.

Current release schedule

Ubuntu releases a new version every 6 months, in April and October. Each is supported for 18 months. I do not really have an issue with this frequency.

Every 2 years one of these releases is an LTS (Long Term Support) version, which they support for 3 to 5 years. This is the release to use for production systems.

In practice

In practice this 6-month release schedule means a lot of bleeding-edge software and versions go into every release. This is good. It means the distribution is up to date, and the software is pushed and cannot rest on its laurels and falter.

Most software versions are stable enough; however, newer departures are often not polished enough and lack extensions, documentation, etc., e.g. Unity, Gnome 3, GDM2, Plymouth. For the initiated, who will research solutions and like dabbling with new software, this is not a big issue. For the vast majority of users, who just want something that works, it is risky and often backfires on their impression of Linux and Ubuntu.

A 6-month release schedule also means quite a frequent upgrade requirement, especially for the uninitiated, non-heavy Linux fanboys (if such a name exists). But you have to draw the line somewhere, and more frequent upgrades do mean a smaller delta and less chance of broken upgrades. On production servers, however, a 6-month upgrade schedule is a non-starter. The LTS schedule is thus more suitable.

LTS

Unfortunately the LTS version is not always the version Ubuntu "promotes". After a newer minor version has been released, it is put on the back burner. There is too much emphasis, I feel, on promoting, discussing and supporting the newer versions, and not enough backporting to the "stable" LTS version.

Also, when an LTS version is released it is "promoted" as LTS immediately. Being a "major" release, a lot of people upgrade to it soon after the release date, but there are still many teething errors before it becomes more "solid", usually when the X.x.1 version is released.



Small modification

They should promote LTS to everyone, and let fanboys use the latest non-LTS versions. This way the keen users will still be up to date, and will ensure fixes and feature velocity, while the majority of users keep a very stable version.

They should not apply the LTS "brand" to a version until the X.x.1 release a few months after the general X.x release. That way teething errors are never found in an LTS version, and more trust can be placed in the version branding.

They should include more backports in the LTS versions. After about a year my LTSes are a nuisance, as their packages are too out of date.

Large modification

The current biennial LTS release schedule leaves a large gap. What about an LTS every year, or every 18 months?

Have a "major", "minor", "tiny" or better "solid", "stable", "unstable" release versioning? Or an unstable release every 4 months(fanboys), stable every 8 (desktops) and solid(servers)(LTS) every 16 months? Too much admin or release confusion perhaps?


While Debian's stable/testing/unstable naming is a close match, Ubuntu has always been reliable. But lately I have been reluctant to upgrade, waiting months until all the initial problems are out of the way before I dabble. And now I usually skip a release or two every time.

Summary

Basically, change the promotion of LTS. Let the LTS be the default version, and do not call it LTS until the Ubuntu X.x.1 teething-problem bugfix version is released.

And optionally change the schedule to more frequent LTS releases, or even 3 levels of stability/support releases.

Wednesday, 20 April 2011

Required reading

Should a development team/department have a required reading list?
A list containing books (articles etc) that each member should read.


I think so.


Required

I am not sure if "required" is the right concept; "strongly encouraged", incentive-linked or similar is perhaps enough emphasis.

Also, "should read" does not mean all books have to have been read before joining the team; it is more a list of books to read while in the team.


Why

To create a common basis for discussions and choices within the team. To grow the competence of the team and ensure more correct decisions by having more relevant information. By reading and updating themselves, the team is exposed to new ideas and can discuss and grow their understanding at work.


Unanimous agreement

It should not be a requirement to agree with the contents and aim of every book; however, it should be a requirement to have read it so that it can be discussed. Being in favour, or quite the opposite, only adds to the quality of the discussion.


Never read them all

You should never be finished with the "reading list". If the list is too short and you have read all of it (or even based it on books you have previously read), then you become complacent and not open to further competences and ideas.


List evolution

The list should grow continuously, and irrelevant/outdated items should be pruned from it. The list should perhaps be split into sections, and also prioritised if it is large.


No reading

Team members that do not read nor update themselves in other ways: do you want them on your team? They may be great right now, but will they stay that way? I am always wary of people who do not expose themselves to new thinking. Can I really trust that their conviction about a solution to an issue is the most prudent one?

Some people are very productive and valuable without updating themselves. That is rare and uncommon, however.


Pretend readers

Some colleagues will perhaps say they have read, or are reading, books from the list, but have no intention of reading anything. There should probably not be any need to quiz them to verify; in due time their evaporating competences will show their true colours anyway.


Book qualification

How do you select books for the list? Initially, senior team members, architects, etc. could select a small set of relevant books. But as the team grows, the list should probably be suggested and voted on by the whole team, to ensure common consensus and to introduce newer, less mainstream suggestions.


Reading speed

An interesting issue is how to ensure that people actually read. That should not just be up to the individual, but should perhaps be part of incentives, project cost and time allocations. That is a harder sell, but it will pay for itself over time.

Some people can read fast, others not; some have no life and can read all weekend, others have 15 kids and no spare time. As long as they are reading, I would be happy. One book per year or ten: it is all good.


Not related titles

Perhaps the list could contain subjects not related to the specific team's function. Fictional, philosophical books? Perhaps not.


My reading list

If I were to list books, I would probably include many items I have not read, have just skimmed, or do not even agree with. I do not have a great amount of time to read books either, but I force myself. I get the odd one for free for reviewing, buy a few related to projects and a few on hobby subjects, totalling 4-8 per year.

To list suggestions for teams similar to mine (enterprise Java, finance sector):



Any comments or suggestions are welcome.

Monday, 7 March 2011

Remove deadlines, increase productivity

In a recent project, tasks were always delegated and pointless deadlines set on everything. This seems so counterproductive and ineffective compared to my past 5 years of agile-based projects (including maintenance).

As for task delegation, I will need to evangelise more about the benefits of self-organisation and ownership through bottom-up delegation, WIPs, velocity projection, prioritisation benefits, etc.

My current beef is with noisy deadlines. Deadlines are for the product owner / project managers. The team should never need to deal with deadlines, especially when not meeting them has no consequence; so why have them at all? The team deals with prioritised tasks and, if needed, time-boxed tasks. The tasks are divided up small enough that a slip of 300% from the estimate has no real impact, as you are only talking about hours or days. However, if the project velocity over time does not seem to meet external deadlines, then the scope of the backlog needs to change; do not push developers/testers harder.

Agile teams should not confuse deadlines with an estimate of how much work is left. If your current task seems like another 2 days' worth of work, do not set a deadline in two days. You may have interfering meetings, or get stuck on a small issue that extends the task into next week. (Or hit a coding happy-zone or eureka moment and finish in a few hours.) Setting a deadline does not make you finish earlier, and pushing developers/testers harder does not improve code quality. A task-based deadline is only noise and stress.

Product owners can plan and communicate dates to customers, company boards, steering groups etc., but these should be soft, movable dates based on current project velocity, not minor deadlines. If the velocity slows down, then reduce the scope of the backlog, or move the dates. Do not put pressure on the team with more micromanaged deadlines. It will only be noise and slow down development. The team does not really need to know specific dates, apart perhaps from the general roadmap (lavalamp?) for big releases.

True, some dates cannot be moved, such as Christmas promotions, larger entities' deadlines, etc. But with enough balls you would be surprised how many customer and senior management deadlines can be moved with good communication, or even removed completely. Good project transparency and communication, and thus visibility of project velocity, is usually a much bigger relationship and delivery benefit.

I agree that some delivery pressure should exist, but no cutthroat, stress-inducing deadlines. An agile process with pride in delivering tasks may quickly vanquish the need for deadlines.







Wednesday, 26 January 2011

Create, populate and reset a dynamic database (HSQLDB, Hibernate, SQLMaven, DbUnit)

Thought I'd jot down how I create, populate and reset databases for development and testing. I use this method in my Java-based pet projects, such as Snaps [app][code] and Wishlist [app][code]. I also include this setup as a default in my project template.

My projects are Maven-based, using Jetty as the Java container through its Maven plugin. In addition I use JRebel, IntelliJ IDEA and Ubuntu, a setup I described in this howto: Ubuntu + IntelliJ + Maven + Jetty + JRebel, but none of these are required.

This setup enables a fluid, very dynamic and quick development process. In addition I use the in-memory database HSQLDB for quick and standalone database interaction.

Using JPA/Hibernate with HSQLDB, my tables can be created automatically when I start Jetty. However, in development and testing I prefer to have some default data pre-populated.

This is where DbUnit comes into play. With DbUnit's Maven plugin I can export my current data and populate future databases with the same data. As it is all easy-to-read XML, I can also manually edit the basic stub data.

However, DbUnit cannot create the database, as it runs before the jetty:run action, so I use the SQL Maven plugin for this purpose. In turn, SQL Maven needs to know what tables to create, so I use the Maven Hibernate3 plugin to export a schema from the JPA annotations.


This combination allows me to:

  1. Check out my project anywhere

  2. Create and populate the database with stub data with one command.

  3. Run the application via jetty

  4. Test the application straight away

  5. or develop dynamically with instant feedback



How to set up Hibernate, SQLMaven and DbUnit plugins



Prerequisites/assumptions

  • Java

  • Maven

  • Jetty (can be the Tomcat Maven plugin)

  • JPA annotations

  • HSQLDB (using file-based HSQLDB persistence to survive restarts)



JPA annotations -> Schema: Hibernate


To export the JPA annotations into a database schema, include the Maven Hibernate3 plugin in your pom.xml. This is quite a long plugin section, as it defines a few dependencies which otherwise may be overridden by the plugin and lead to problems like this mapping exception.


<profile>
  <id>hbm-export</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>hibernate3-maven-plugin</artifactId>
        <version>2.2</version>
        <executions>
          <execution>
            <phase>process-classes</phase>
            <goals>
              <goal>hbm2ddl</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <components>
            <component>
              <name>hbm2ddl</name>
              <implementation>jpaconfiguration</implementation>
            </component>
          </components>
          <componentProperties>
            <persistenceunit>${project.artifactId}</persistenceunit>
            <outputfilename>schema.ddl</outputfilename>
            <drop>false</drop>
            <create>true</create>
            <export>false</export>
            <format>true</format>
          </componentProperties>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>hsqldb</groupId>
            <artifactId>hsqldb</artifactId>
            <version>${hsqldb.version}</version>
          </dependency>
          <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-entitymanager</artifactId>
            <version>${hibernate.version}</version>
            <exclusions>
              <exclusion>
                <groupId>cglib</groupId>
                <artifactId>cglib</artifactId>
              </exclusion>
              <exclusion>
                <groupId>commons-logging</groupId>
                <artifactId>commons-logging</artifactId>
              </exclusion>
            </exclusions>
          </dependency>
          <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-core</artifactId>
            <version>${hibernate.version}</version>
            <exclusions>
              <exclusion>
                <groupId>cglib</groupId>
                <artifactId>cglib</artifactId>
              </exclusion>
              <exclusion>
                <groupId>commons-logging</groupId>
                <artifactId>commons-logging</artifactId>
              </exclusion>
            </exclusions>
          </dependency>
          <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-annotations</artifactId>
            <version>${hibernate.version}</version>
            <exclusions>
              <exclusion>
                <groupId>cglib</groupId>
                <artifactId>cglib</artifactId>
              </exclusion>
              <exclusion>
                <groupId>commons-logging</groupId>
                <artifactId>commons-logging</artifactId>
              </exclusion>
            </exclusions>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</profile>


To run the schema creation action on its own:

mvn -DskipTests \
  -Dhibernate.dialect=org.hibernate.dialect.HSQLDialect \
  -P hbm-export compile hibernate3:hbm2ddl;


Note: We need to compile the JPA classes so that the plugin can extract annotation information from them.
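
For reference, a minimal illustrative JPA entity of the kind the plugin picks up (not from the actual projects):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Wish {

  @Id
  @GeneratedValue
  private Long id;

  private String title;

  // getters and setters omitted for brevity
}

Running the hbm2ddl goal above turns annotated classes like this into the create table statements written to schema.ddl.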

Note 2: Hibernate/Red Hat have a bad track record of including their plugins and releases properly in the Maven central repository, so you might want to adjust your dependencies and repositories accordingly. Or you can use my proxy repository:


<repositories>
  <repository>
    <id>code-flurdy-repo</id>
    <name>code@flurdy repository</name>
    <url>http://code.flurdy.com/nexus/content/groups/noncentral</url>
    <releases><enabled>true</enabled></releases>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>
<pluginRepositories>
  <pluginRepository>
    <id>code-flurdy-repo</id>
    <name>code@flurdy repository</name>
    <url>http://code.flurdy.com/nexus/content/groups/noncentral</url>
    <releases><enabled>true</enabled></releases>
    <snapshots><enabled>false</enabled></snapshots>
  </pluginRepository>
</pluginRepositories>


Please, if your project is popular, do not use my repo, as it will explode my EC2/S3 data bandwidth usage!


Schema -> Database: SQLMaven



To create your database from this schema with SQL Maven, add this to your pom.xml:


<profile>
  <id>sqlmaven</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>sql-maven-plugin</artifactId>
        <version>1.3</version>
        <dependencies>
          <dependency>
            <groupId>hsqldb</groupId>
            <artifactId>hsqldb</artifactId>
            <version>${hsqldb.version}</version>
          </dependency>
        </dependencies>
        <configuration>
          <driver>${db.driverClassName}</driver>
          <url>${db.url}</url>
          <username>${db.username}</username>
          <password>${db.password}</password>
          <autocommit>true</autocommit>
          <srcFiles>
            <srcFile>target/hibernate3/sql/schema.ddl</srcFile>
          </srcFiles>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>


To run the database creation action on its own:

mvn -DskipTests -P sqlmaven sql:execute;


Database population: DbUnit



To populate your database via DbUnit, add this to your pom.xml:


<profile>
  <id>dbunit</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>dbunit-maven-plugin</artifactId>
        <version>1.0-beta-3</version>
        <dependencies>
          <dependency>
            <groupId>hsqldb</groupId>
            <artifactId>hsqldb</artifactId>
            <version>${hsqldb.version}</version>
          </dependency>
        </dependencies>
        <configuration>
          <dataTypeFactoryName>org.dbunit.ext.hsqldb.HsqldbDataTypeFactory</dataTypeFactoryName>
          <url>jdbc:hsqldb:file:${project.basedir}/target/db/build;shutdown=true</url>
          <driver>org.hsqldb.jdbcDriver</driver>
          <username>sa</username>
          <password></password>
          <type>CLEAN_INSERT</type>
          <src>src/test/data/stub.xml</src>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>


First you need data to export. I suggest you run the jetty:run action and use the application to create data in the database. E.g. register a user, create default domain objects, or whatever your application uses.

You then stop the application and export the data created:

mvn -DskipTests -P dbunit dbunit:export;

This is by default exported to target/dbunit/export.xml.

This makes good base data. You can adjust and extend this file if you like, but be aware that you might break it, so perhaps try it unadjusted initially.

Copy the export file to src/test/data so that it can be part of your source control etc. You may have noticed I already included src/test/data/stub.xml as the src element in the plugin.

cp target/dbunit/export.xml src/test/data/stub.xml;

Run the database population action (remember to have a new clean database):

mvn -DskipTests -P dbunit dbunit:operation;


Example configuration of all these plugins can be found in this project's pom.xml.


Combined




Above are the basic steps. I then aggregate them into a couple of common one-liners:

Create database:

mvn -o -DskipTests \
  -Dhibernate.dialect=org.hibernate.dialect.HSQLDialect \
  -P hbm-export,sqlmaven \
  compile hibernate3:hbm2ddl sql:execute;




Create and populate database:

mvn -o -DskipTests \
  -Dhibernate.dialect=org.hibernate.dialect.HSQLDialect \
  -P hbm-export,sqlmaven,dbunit \
  compile hibernate3:hbm2ddl \
  sql:execute dbunit:operation;




Reset database:

rm -rf target/db;
mvn -o -DskipTests \
  -Dhibernate.dialect=org.hibernate.dialect.HSQLDialect \
  -P hbm-export,sqlmaven,dbunit \
  compile hibernate3:hbm2ddl \
  sql:execute dbunit:operation;




Clean, rebuild database and run jetty:

mvn -o -DskipTests \
  -Dhibernate.dialect=org.hibernate.dialect.HSQLDialect \
  -P hbm-export,sqlmaven,dbunit \
  clean compile hibernate3:hbm2ddl \
  sql:execute dbunit:operation jetty:run;


As you can see, these one-liners can be quite long, so I wrap them in bash scripts that I put in a bin folder.

You can extend this further by enabling these plugins by default and attaching the Maven goals to your other goals: e.g. include hbm-export and sqlmaven as part of the install or test command, or dbunit:operation with the jetty:run command, so that they are totally automatic. However, I prefer a bit more control over when my data is reset.


Summary



With these plugins I can create and populate my database in one command, so that my app is up and running with data very quickly. I hope this is of use to others.






Thursday, 6 January 2011

InfoQ Presentation Video Enlarger

I like Infoq, and especially their presentations.

Each presentation page has 3 elements:

  • Top left: One small video showing the person presenting

  • Top right: One small window describing the presentation and presenters

  • Bottom: One large window showing the actual presentation slide



This layout works very well, and basically shows you what you need.

But...

The video window is a bit small (320x240), especially on higher resolution screens, and especially when the presenter shows some code, which is usually displayed in the video window.

So I wrote a little user script which enlarges this video window (480x320).
This script is only for the Google Chrome browser, but converting it to a similar Firefox-based GreaseMonkey script should be easy.




To install, click on my InfoQ Presentation Video Enlarger user script, available at
www.ivar.co.uk/cargo/infoq-enlarge-presentation.chrome.user.js.




You can download and look at the js in a text editor first. It really is a very small and easy-to-understand script.

Anyone can contribute changes if desired.