Wednesday, 30 January 2013

Heroku commands & tips


I wrote a quick howto of Heroku commands and tips I use all the time.

It covers creating instances, configurations, common add-ons such as databases & email, deployment strategies and more.

As mentioned in the document, I tend to have multiple Heroku applications per project, so I always append, for example, --remote staging, resulting in this long deploy command:

git push staging master && \
heroku logs -t --remote staging; \
heroku open --remote staging && \
heroku logs -t --remote staging

The eagle-eyed will have noticed the ";": since the log tailing never finishes on its own, once you see that the app is up and running you end it manually with Ctrl+C, which then lets the command proceed to open the browser (and tail the logs again).
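As background, the staging remote above assumes the Heroku apps were created with named git remotes, something like this (the app names are placeholders, not from the original howto):

# one Heroku app per environment, each with its own git remote
heroku create myproject-staging --remote staging
heroku create myproject-production --remote production

# heroku commands then need to be told which app to target
heroku config --remote staging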

Obviously these commands tie nicely into my use of Play on Heroku and the howtos I wrote on integrating the two.

These commands are perhaps obvious and basic, but I hope they are of use to some people.


Monday, 5 November 2012

Lean book review: Lean Architecture and Lean from the Trenches

Quick reviews of two technical/project management books I read lately.

First is "Lean Architecture" by James Coplien.

Lean Architecture: for Agile Software Development

Coplien is someone I have great respect for; I have listened to a few lectures by him and read several of his articles. He definitely seems very knowledgeable on the subject and good in panel debates against other self-styled agilistas. Some people may recognise him from his foreword to the Clean Code book by Uncle Bob. So I was looking forward to reading this book.

However this book was not great. It is very wordy and repetitive. The book keeps going off on tangents about the history of agile and lean, which, while nice, is not why I bought the book. Only in the last few chapters does it actually get to the point of the book, the DCI architecture style.

If you want to learn about DCI (Data, Context & Interaction) then it may be the book for you, especially if you also want to pick up the history of The Toyota Way, Lean and Agile. Otherwise don't bother.



Another book I read is "Lean from the Trenches" by Henrik Kniberg.

Lean from the Trenches: Managing Large-Scale Projects with Kanban

Having previously read two of his other books ("Kanban and Scrum - making the most of both" & "Scrum and XP from the Trenches"), I was expecting a helpful book. Kniberg is very much a Kanban man, so I was interested in his Lean views.

And this book I thought was very good. It is a reflection on a large-scale project for the Police in Sweden and on what the team learned by adopting lean practices as they went along, with later chapters giving more detailed reflections and background on the subject matter. (I suspect the writing style leads readers to think all the knowledge was gained by the team accidentally along the way, but knowing his previous experience I am sure he nudged most of it in the right direction.)

His writing style and diagrams are very easy to follow; I finished the book in a few days of reading on the commute, and was very inspired. Highly recommended.

Wednesday, 23 May 2012

Send email via SendGrid on Heroku using Play! 2.0 with Scala

If you have a Play! framework 2.0 application that you want to send email from, here are a few tips.

These tips assume you deploy to Heroku, but other platforms should work similarly. The examples here use Scala, but Java should work along similar lines. Finally, the specifics are for the SendGrid add-on for Heroku, but other mail server providers should be fine.


First add the free option of the SendGrid add-on to your Heroku app by typing in:
heroku addons:add sendgrid:starter

Then configure your Play! app to use the mail plugin provided by Typesafe:

Add to your dependencies in the project/Build.scala file: (all on one line)
"com.typesafe" %% "play-plugins-mailer" % "2.0.2"
Then create a conf/play.plugins file and add this line to it:
1500:com.typesafe.plugin.CommonsMailerPlugin
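For reference, the dependency from the step above might sit in a generated Play 2.0 project/Build.scala roughly like this (the app name and version are placeholders):

import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {

  val appName    = "myapp"          // placeholder
  val appVersion = "1.0-SNAPSHOT"   // placeholder

  // the mailer plugin added to the generated dependency list
  val appDependencies = Seq(
    "com.typesafe" %% "play-plugins-mailer" % "2.0.2"
  )

  val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA)
}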

Next configure the mail server settings. You could add these directly to your conf/application.conf file, but since I share my projects' source code, I set the production settings via environment variables so that my username/password are not publicly available.

However, for the plugin to run, smtp.host must be present. Open conf/application.conf and add:
smtp.host=mock

On Heroku I pass the real settings as Java system properties via Heroku's proprietary Procfile. To use SendGrid's servers I append these to the web process line: (all on one line)
-Dsmtp.host=smtp.sendgrid.net -Dsmtp.port=587 -Dsmtp.ssl=yes -Dsmtp.user=$SENDGRID_USERNAME -Dsmtp.password=$SENDGRID_PASSWORD
You may already have other settings in the Procfile, e.g. a database URL, so be aware of the 255 character limit; if you hit it, point to a custom properties file instead, e.g.: (all on one line)
web: target/start -Dhttp.port=${PORT} -Dconfig.resource=heroku-prod.conf

The SendGrid add-on should create the SENDGRID_USERNAME and SENDGRID_PASSWORD environment variables for you.
You can verify this with:
heroku config

Finally, create the application code that sends the email:

package notifiers

import com.typesafe.plugin._
import play.api.{Mode, Play}
import play.api.Play.current
import play.Logger

object EmailNotifier {

  def sendMail {
    val mail = use[MailerPlugin].email
    mail.setSubject("Mail test")
    mail.addRecipient("Joe Smith <joe@example.com>", "sue@example.com")
    mail.addFrom("Jon Doe <joe@example.com>")
    mail.send("Test email")
  }

  def sendTestMail {
    if (Play.current.mode == Mode.Prod) {
      Play.current.configuration.getString("smtp.host") match {
        case None         => Logger.debug("Email mock")
        case Some("mock") => Logger.debug("Email mock")
        case _            => sendMail
      }
    } else {
      Logger.debug("Email mock")
    }
  }
}
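To actually trigger the notifier you would call it from somewhere in your application, for example a controller action. A minimal hypothetical sketch (the action name is made up for illustration, and it needs a matching line in conf/routes):

package controllers

import play.api.mvc._
import notifiers.EmailNotifier

object Application extends Controller {

  // hypothetical action that fires off the test email
  def sendTestEmail = Action {
    EmailNotifier.sendTestMail
    Ok("Email sent (or mocked, depending on smtp.host)")
  }
}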




This should be all that is needed.


Play! 1.x did have a handy mock email interface for development and testing. I will try and find a suitable replacement for 2.0 and update this post when I do.









Sunday, 22 April 2012

Play! 1 & 2 command tip

If you love the Play! Framework, you might be like me and have both version 1.x and 2.x installed.

Version 1.x, in my case 1.2.4, is a well-established, feature-rich, stable version.
Version 2.x, in my case 2.0, is a new, radically different version that is still in its infancy, but released.


You might have both installed because you have older projects using 1.x and new, in-development projects using 2.x, or similar.




With both versions the norm is to install them and expose the main executable as "play". So how do you differentiate between which version to use for which project?


OK, it is not rocket science, but here is a quick tip on how I do it:



I have Play! 1.x installed in
/usr/local/lib/play-1.2.4


I have Play! 2.x installed in /usr/local/lib/play-2.0


You can add either Play folder to your PATH, e.g. in /etc/environment:


PATH="blahblabh:/usr/local/lib/play-2.0"


But I simply add these to my .bash_aliases file:


alias play1="/usr/local/lib/play1/play"
alias play2="/usr/local/lib/play2/play"


For those aliases to work, I symlink the version folders like this:


$: cd /usr/local/lib;
$: sudo ln -s play-1.2.4 play1;
$: sudo ln -s play-2.0 play2



With this setup I have to make a conscious decision whether to run Play! 1 or 2, and can switch between the two very easily.

$: play1 help;
$: play2 help




Avoiding cyber squatting failure


A year or two ago I misspelled the domain name of a very popular site used by many developers. It came up as "Not Found", so I realised that since the site at the time was still quite niche (not any more), cyber-squatters had not cottoned on to it yet.

So I registered a couple of similar domain names with a misspelled vowel. This was mainly because I thought it was quite funny at the time, but also because I did not want real, professional & cynical cyber-squatters to register them either.

Of course I did not know quite what to do with them so I put them as Google parked domains (AdSense for Domains) and forwarded all emails automatically to the proper domain. If ever contacted by the proper site I would just let them have the domain(s).


End of AdSense for Domains

This spring Google closed their AdSense-backed parked domain offering, and I needed to reflect on what to do with the domain names and what options I have.

I do not really want to keep paying for their registration; the ads on them brought in less than £20 a year, so they are not likely to challenge my ethical backbone either.

Some of the domain names expired this spring, and I was initially just going to let them go, but from experience I know cyber-squatters scan for expired domain names and pick them up. They will use them far more cynically and are unlikely to ever hand them over for proper usage. So I extended the registrations of those domain names.


Domain parking bad taste

In the end I moved the domains from AdSense For Domains to another Domain Parker.

But this has left a really bad taste in my mouth. I am not really any better than a cyber-squatter. I am profiting from misspellings, although by less than the registration costs... The content on the parked domains is really of no assistance to the people trying to reach the proper site. So I cannot leave it as it is.


"Nice" Anti Cyber Squatting 

The best solution would be a friendly, not-for-profit, community-backed anti-cyber-squatting service offering useful content/redirection. A sort of defensive registration the community can do on behalf of proper sites. Naturally the proper sites should really register these names themselves, but some can't or won't. To avoid squatters taking advantage, a free service like this would be handy. Of course we/I would still have to pick up the registration cost.


Landing page

But instead I think I might put up a brief comical landing page with a big button to go to the proper site. (However, until I actually get round to creating that, the domains are still listed with the domain parker....)



Wednesday, 22 February 2012

Clone local git repository to remote server?

If you have a local repository that you want to clone/copy to a remote server here is how I do it.

Perhaps you have been scaffolding, testing and doing initial silver-bullet development, and have realised the project is mature enough to share with others. Or you just want it backed up remotely.

A simple git clone from local to remote does not work; the destination path for git clone is always local.

What you need to do is create a bare git repository on the remote server:

remoteserver$ mkdir -p /var/local/repos/myproject;
remoteserver$ cd /var/local/repos/myproject;
remoteserver$ git init --bare;


Then add this remote to your local git repository:

localmachine$ git remote add origin \
ssh://remoteserver.example.com/var/local/repos/myproject;
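You can double-check that the remote was registered with:

localmachine$ git remote -v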


Now every time you want to push to your remote server, you do a normal push:

localmachine$ git push origin master;



Although I would seriously consider using something like GitHub or Gitorious instead of your own server.



Monday, 13 February 2012

Continuous Deployment via Stack Overflow

I do have an interest in Continuous Deployment/Continuous Delivery. Continuous Deployment goes beyond Continuous Integration and automatically deploys builds to production all the time, not just once in a while. It goes well with agile thinking.

With frequent deployments the delta that can go wrong is very small, the feedback is very quick, and the knowledge is fresh and active with the developers, so fix turnarounds are immediate. By relying heavily on automated integration testing and a DevOps culture that provides automated deployment, the result is quick and painless.

This week I answered a difficult question on Stack Overflow. The question, by Emwee, asked for advice on how to use Continuous Deployment with a multitude of inter-dependent systems.

It is a tricky question and I did not have an exact answer. My answer was more along the lines of how to facilitate easier deployments by making the dependencies and coupling looser and roll-outs smoother.

I referred to Duck Typing, Feature toggles, evolution scripts, version tables, symlink stage & deploy, and referenced a Hacker News discussion on how Amazon deploys their systems, a video on how Netflix builds releases in the cloud and how at IMVU they deploy 50 times a day.

I also referred to Humble and Farley's book on Continuous Delivery.

There was another reply as well, by EricMinick, referring to his previous answer to a similar question. Eric describes in detail scenarios of promoting builds into different isolated test environments and suggests solutions that his company UrbanCode provides in combination with Jenkins.

In the end Continuous Deployment is a great evolution, but with large enterprises you need to keep your wits about you. It is worth the investment.


Friday, 3 February 2012

Multi book reading

I have a habit that I wonder if others share as well. And I wonder if it is productive or counterproductive.

What I do is multi-book-reading. By that I mean I read several books at once.

How I do this is by having different books in different locations. And also several books in the same location. I have a preference for printed books. PDFs of books are handy occasionally for specific searches, but I really cannot read page upon page on a screen. Kindle may be nice, but I have not yet jumped on that bandwagon.

This multi-book effect is partially due to laziness, as when I am in the lounge I do not want to have to walk back upstairs to my study if I get a sudden urge to read or fetch a book. So I always have a half read book in the lounge. (Currently in the lounge I am re-reading the Kanban book by Anderson).


In my study I naturally have bookshelves loaded with computer books, especially Java, and always a couple on my desk. I tend not to read for too long in this room (then why is it called a study?), just a quick few paragraphs while the PC is rebooting etc. a few times a day. The reference books, however, get referred to when needed. (Current study books are Seven Languages in Seven Weeks by Tate and Continuous Delivery by Humble & Farley)

On my bed side table there may be a fiction book. I don't really want to think too much when trying to sleep.

At work I would also have a mini library as I also tend to evangelise and lend books to colleagues. So I usually have a half read book or two or three there as well. Again for quick glances while rebooting etc, but also for longer reads during lunch if I go by myself. (Just finished ReWork by Fried & Heinemeier Hansson and Linchpin by Godin)

When I am out-and-about e.g. queuing for the till in a shop, waiting for the bus or at lunch when I forgot a book, I use the Aldiko e-book reader on my Android phone. Being a sucker for offers at O'Reilly for epub formatted books I have quite a few on my phone. Aldiko is excellent, but the small phone format is in essence rubbish for reading books over time. It is good for quick 1-5 minutes reads but no longer. And useless for reference books as the overview of the page is difficult. (97 Things Every Programmer Should Know by Henney is good for 5 minute reads)

For longer commutes by train or plane I tend to bring an actual book. If I do this regularly I should probably invest in a Kindle. (Read bits of Specification by Example by Adzic on the plane recently)

I tend to do a lot of quick reads, as stop-gap fillers between other events. Maybe work interrupts, maybe it is some element of ADHD, or probably our 6-month-old daughter. This means I constantly waste time remembering the context of where I am in a book.

Longer reading periods are usually a holiday pattern. These would then often involve more fiction books. I read the Night's Dawn trilogy[2][3] over the last few holidays.

Some books I might skim-read chapters of to get the gist, knowing I can revisit them if I need those sections in detail in the future. Some books I never finish. I would say a good third or more of my library I have not finished or have barely read. Some turned out just not interesting, some are just skim-read, while others are casualties of multi-book-reading, lower down the reading priority list.

So this multi-book-reading means I do read a lot of books. But it also means I do not finish enough books. Not very "kanban" or "one-piece-flow" :)



Tuesday, 17 January 2012

SOPA protest page and 503 redirect

Tomorrow, 18th January 2012, many websites will protest against the US SOPA and PIPA acts. For example, Wikipedia will go blank in protest for 24 hours.

For those intending to black out their own websites (or who in the future intend to do something similar, e.g. a more practical "web site temporarily down" message while doing an upgrade), here are a few tips:

First of all, Google recommends that you do not simply change your website's front page to a blank page or similar, as this can have repercussions on your SEO, i.e. your search ranking. Read more about it in this post on Google+ by a Googler. They recommend a 503 error response instead, which indicates the site is temporarily down.

So a simple change of index.html is not recommended. Nor is a redirect in the HTML's meta header, nor a plain HTTP 302 redirect. All of these can affect your ranking.

I recommend (if using Apache 2) to use mod_rewrite in this manner:

RewriteEngine on
RewriteCond %{ENV:REDIRECT_STATUS} !=503
Alias /stop-sopa /var/www/stop-sopa
ErrorDocument 503 /stop-sopa/index.html
RewriteRule !^/stop-sopa /stop-sopa [L,R=503]


This uses Alias to map to another folder so that the same message can be used for several virtual hosts. It uses a custom ErrorDocument to display a human-readable blackout page. And it uses a RewriteRule to redirect all requests to the stop-sopa page (except requests for /stop-sopa itself, so that you don't get an infinite loop).
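Once in place you can sanity-check it from the command line, e.g. with curl; the first line of the response headers should report a 503 status:

curl -I http://www.example.com/any-page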

If you are looking for a page to use as the blackout page, there is a nice github project page for just that. An example can be viewed here.






Saturday, 14 January 2012

Agile project tools for personal/open source projects

I have been briefly assessing some free online tools for agile task planning for a few personal FOSS projects.

A physical task board is perhaps what the agile purists would suggest, however it is not useful for me (nor my family :)).

At work I often have to use the awful Quality Center. It is good for planning functional testing, but not much else. The user interface is painful, and it only works on Windows with IE.

But most projects I have been on eventually drop it for the more developer-friendly Jira by Atlassian. Its UI gets cleaner and cleaner, and it is great for Scrum projects since the integration of GreenHopper. It is however very feature-rich, which is both good and bad, and sometimes quite slow. I recommend Jira for distributed larger organisations. It is however overkill for my needs.

I have been using Pivotal Tracker for some of my projects for a few years. It is a great tool, and for Scrum projects it is the tool I would recommend the most. They recently started charging, but it is still free for public projects. It is however very iteration/Scrum centric and as such not useful for my more Kanban-ish, time-independent requirements.

So I started to look at more tools (and revisit some previous ones).
My requirements are:

  1. Free, as in beer or near enough. $9/month and similar is too much for personal projects unless heavily used.

  2. Agile task board simulation

  3. Not time iteration based

  4. Simple functional UI, but not ugly

  5. Icebox feature for storing tasks/ideas not yet ready for the backlog

  6. Pivotal like Feature, Chore and Bug classification

  7. Limiting WIP

  8. Kanban queues

  9. Simple T-shirt or fibonacci estimates

Not all requirements have to be met.

Here are my initial impressions:

Pivotal Tracker

Time iterative centric.
Looks nice. Clean interface.
No WIP limit.
No kanban queue.
Got Icebox feature
Got Feature-chore-bug classification.
Fibonacci estimates.
Unlimited free public projects.

AgileZen

Kanban style flow.
Looks nice. Clean interface.
Columns can be renamed.
Got WIP limit.
No icebox. Can rename the backlog to icebox and rename another column to backlog.
No estimates
Only 1 project on the free price plan.
FOSS projects can apply for free usage.

Kanbanery

Kanban style flow.
Looks nice. Clean interface.
Columns can be renamed.
Got WIP limit.
Got Icebox feature
Got Feature-chore-bug classification.
T-shirt estimates.
Only 1 project on the free price plan.
No FOSS free plan.

Kanbanpad

Kanban style flow.
Clean interface.
Little confusing UI.
Got Kanban queues.
Got WIP limit.
Got Icebox (the "backlog").
No estimates.
Unlimited projects.
All plans are free.
Permissions are strange. There is no option where members can edit and the public can only view: it is either members view and edit with no public access, or the public (anonymous) can both view and edit!!

Leankit

Kanban style flow.
Seems very feature rich. Perhaps too many features.
UI a little cluttered.
Tasks seem too much like post-it notes.
Only 1 project on the free price plan.
No FOSS free plan.


ScrumDO

Scrum focused.
Looks nice.
Feature rich.
UI a little confusing.
No Icebox.
No WIP limit.
No kanban queue.
Fibonacci and t-shirt estimates.
10 projects on the free price plan.
No FOSS free plan.

Flow

Kanban style flow.
Tasks seem too much like post-it notes.
No Icebox.
Got WIP limit.
Only 1 project on the free price plan.
FOSS projects can apply for free usage.


I may update this in the future when I get more impressions of the ones I use and if I find other tools.


My recommendations depend, but currently they are:

  • For large commercial projects Jira offers features and reports, and can be installed inside your firewall.

  • For Scrum projects Pivotal Tracker offers the most complete package.

  • For Kanban projects it depends on your own requirements and taste, but my current favourites are Kanbanery and AgileZen. Kanbanpad's lack of restrictions on the number of projects is also tempting.

Wednesday, 11 January 2012

Play! 2.0 in IntelliJ IDEA

Play! Framework 1.x supported creating an IntelliJ IDEA project by the command: play idealize.

However, while Play! Framework 2.0 is in beta, that command does not work**.

So how do you get your Play! 2.0 project to open in IntelliJ IDEA? There are a few different workarounds, especially regarding integrating sbt.

However I have a quick way. For this to work you need both Play! 2.0 and Play! 1.2.x installed.

Create Play! 2.0 project:
/usr/local/lib/play-2.0-beta/play new helloworld
(I am assuming it was a Java project that you chose)

Rename project folder:
mv helloworld helloworld2

Create Play! 1.x project:
/usr/local/lib/play-1.2.4/play new helloworld

Create IntelliJ project:
cd helloworld;
/usr/local/lib/play-1.2.4/play idealize


Move IntelliJ files to Play! 2.0 project:
cd ..;
mv helloworld/helloworld.i* helloworld2/


Remove 1.x project and rename 2.0 folder:
rm -rf helloworld;
mv helloworld2 helloworld


Now you can open IntelliJ, go to File / Open Project, and find and open helloworld/helloworld.ipr.


There will be some issues, such as libraries etc., but this is a good start. For further tips try these suggestions.


** As of 11th of January 2012 it is not present in Play! 2.0. I fully expect Play! to create an idealize, eclipsify, netbeansify etc as soon as 2.0 is stable.

Wednesday, 16 November 2011

Do not rewrite

Just don’t do it

Instead evolve

Many (99.99%) of developers continually insist that whichever application they work on needs a rewrite. As in scratch everything, redesign the architecture and technology choices and rewrite every component because the current choice and state is not ideal. I used to feel and say this as well.

Any code I have written (and even worse if written by others) which I need to update/maintain will look bad after only 6 months. After 2 years it smells of bad architecture design and logic. After 5 years the code base feels like a mess, totally unmaintainable.

This is partly because technology has moved on, and my skills, or rather preferences, have evolved. But it is mostly because over time the application has been updated with bug fixes and new features, and the maintenance may have been handed over to other people several times, with different skill levels or understanding of the architecture. This bloats the code structure beyond what the original, probably quite tidy, structure was intended for.

Any further maintenance is more and more costly as it takes longer and/or more people to make any changes to the application. And any innovation is muffled as inspiration is dampened.

Many companies make a good living when other companies outsource the costly maintenance of these systems.


I used to feel the need to rewrite in every new assignment but not anymore. My advice now is: Please do not rewrite


Why are rewrites a bad idea?


No value

If you undertake a rewrite of your core application it will be very costly for your business, with no initial return on investment. Take 6-12 months of pure losses, and in the end the customer (internal or external) has not received any extra value. Most likely they will have fewer features as scope was reduced, and rarely any new ones. Future performance gains and maintainability are a plus, but not of value there and then. So it is a huge hit on fragile budgets, or even the liquidity of smaller companies, with little to no benefit.

This is the main reason not to rewrite: the business side gains nothing from it, it is a pure expense. No reasonable business person would approve it, and the future relationship will quite likely be very damaged.


Never finishing

Another risk is that the rewrite takes too long and either never really finishes or is cancelled, making it an expensive task with no value now or in the future.

Rewrites that span a long time also usually suffer from the decisions made at the beginning starting to look outdated and wrong before the system is even in production and before any future value is gained.


Evolve


However, I do not mean you should not evolve your application. Do not keep the status quo either. If it is an important core application you should evolve it, just not through a total rewrite.

Instead do it in smaller steps while adding value and features for the customer. This way your product stays cleaner and up to date, but the business side does not take a huge hit. It is also much more predictable and less volatile, which makes it less likely that the project will be cancelled, or, worse for smaller companies, that they go bankrupt.


How do you go about evolving and not rewriting? This very much depends on your type of application.

(If it is a tiny application, go ahead and rewrite it. Just make sure the dependent systems are aware of this.)


Refactor

On a minor level, keep up a continual process of refactoring, with larger refactorings being a frequent and acceptable task. This should slow down code rot, postponing the need for larger changes for a while.


Modularisation

Eventually your application will need a drastic change. If by chance your original design was quite modular, or loosely coupled with other systems, you have made evolving smoother. If not, try to modularise your current design.

By being modular you can still perform rewrites, but on a much smaller scale, taking one module at a time. It is then not a 12-month loss, but perhaps a 1-month loss, while normal work on other modules can still continue. This is much less likely to kill your project/company.


Parallel systems

Instead of a hard one-day switch-over from the old system to the new system, run the two systems in parallel for a while: the old legacy system and the new clean system.


Skinny interface adapter layer

Whether modularised or not, a cleaner way to evolve and replace sections of your application is to introduce an adapter layer for your interface, and keep it skinny.

This interface adapter only relays between your system/application and other systems. When other systems only talk via the adapter you are free to change your system without affecting external systems.

More importantly, you can now split your system in two and relay certain calls to the new system instead of the old, without external systems being aware of or affected by this.

The adapter layer is not always possible, and people can be resistant to introducing yet another layer/system, but it really makes you much more adaptable in changing your own architecture: you can move elements to a new system and roll back without external costs.
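As a rough illustration only (the service name and types below are invented), the adapter boils down to a thin pass-through that decides which system handles each call and nothing more:

// sketch: a skinny adapter that only relays calls, no business logic
case class Order(id: Long)
case class Receipt(orderId: Long)

trait OrderService {
  def placeOrder(order: Order): Receipt
  def orderStatus(id: Long): String
}

class OrderServiceAdapter(legacy: OrderService, modern: OrderService) extends OrderService {
  // placeOrder has already been migrated to the new system...
  def placeOrder(order: Order): Receipt = modern.placeOrder(order)

  // ...while orderStatus is still served by the legacy system
  def orderStatus(id: Long): String = legacy.orderStatus(id)
}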


No logic in adapter layer

Do not ever put any logic in the interface adapter. Otherwise you end up with yet another system to maintain. Unfortunately this happens quite often: bad planning and management lead to shortcuts which add logic to the adapter layer, and then you have just multiplied your maintenance costs. And believe me, that logic is rarely temporary.


ESB

Do not interpret this adapter layer requirement as a reason for introducing an ESB. ESBs tend to just camouflage the spaghetti integration code, and often introduce the above-mentioned unwanted logic. However, if you already have an ESB in place it can fulfil the adapter layer requirement.


Legacy system becomes adapter layer

Another way of implementing the adapter layer is to have the existing system relay calls to the new system instead of handling them itself. Internally the existing legacy system has to do this anyway for each element migrated to the new system, so you can also let external systems go through it.

It is however cleaner to have a separate adapter layer. It will be easier to maintain, as it is probably designed on newer technology and platforms, and it allows you to switch off the legacy system when you can. It also makes it less likely that people just decide to use the legacy system instead.


Topic / feature

An even better rewrite/evolution strategy than per module is per topic/feature. This is an even smaller-grained change and less risky. As part of more agile strategies you can move one feature at a time to the new clean system.

With an adapter layer this switching is smooth, but it is not restricted to an adapter layer. Without the adapter you just have more administration in changing every dependent system for each feature moved.

Feature toggles might be part of this strategy.
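As a small sketch of what that could look like (the toggle lookup and names are invented for illustration; real toggles could live in configuration or a database):

object FeatureToggles {
  // reads e.g. -Dtoggle.search-on-new-system=true; illustration only
  def isOn(name: String): Boolean =
    sys.props.get("toggle." + name).exists(_.toBoolean)
}

object SearchFrontend {
  def customerSearch(query: String): Seq[String] =
    if (FeatureToggles.isOn("search-on-new-system")) newSystemSearch(query)
    else legacySearch(query)

  // stubs standing in for calls to the new and the legacy system
  private def newSystemSearch(query: String): Seq[String] = Seq.empty
  private def legacySearch(query: String): Seq[String] = Seq.empty
}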



New feature -> new system
Update feature -> new system


Every new feature request is naturally going to be implemented on the new system. But a good method of choosing which existing features to move is to pick a feature that there is a requirement to modify. Then do not touch it on the old system, but instead rewrite and update it on the new system. This is a good carrot/stick way to ensure the migration is performed, and it ensures that the company receives some value for each rewrite.


Do not update the old (legacy) system/application

Another very important rule is not to update the old legacy system at all once each feature/module is migrated off it. It is so tempting to take a shortcut and update both systems, perhaps because you have not introduced an adapter layer or instructed enough external systems to use the new system. This will kill your migration/rewrite.

For this step the leadership of your team needs to be firm and insistent. Do not end up maintaining two systems (or three, with the adapter layer).


Ensure legacy system delegates

The legacy system will undoubtedly refer internally to the rewritten and migrated module or feature. You need to ensure the legacy system now delegates internally to the new system. This is the only update you should make to the old system. Otherwise, again, you will need to maintain two systems, and you run the risk of the legacy system and the new system executing the same task slightly differently, leading to people insisting on using the legacy system.


Kill the legacy system

You need to plan for and ensure the old system is eventually switched off. Otherwise you will still need to keep people and skills around to maintain that system. It may be tempting, but erroneous, to leave behind some elements that should be on the new architecture.

There may be some elements left on the old system that are not needed on the new system, or are unrelated to it. But it is probably better to move even these to another new system than to keep the old system lying around, increasing the cost of maintaining that small element as well.


Kill the adapter layer?

Once you have killed the legacy system, do you want to keep the adapter layer? Keep in mind you might want to move on from the new system when it starts to rot as well. However, it may be tidier and less complicated to kill the adapter layer too. I would kill it and, if needed in the future, reintroduce it instead.


Do not stop half way

If for some reason the migration to a new system is stopped, either due to reprioritisation, lack of progress, etc., then you are stuck with maintaining not one but three systems. Many companies end up burned by this, and it is mostly down to leadership/management not being strong enough.


So my point is to rewrite small topics/features then modules, but never the whole application. This way value is introduced along the way without a budgetary black hole.


References

Many, but some from former colleague Anders Sveen




Tuesday, 30 August 2011

Null is okay

Most of the bugs that you find, or that are reported to you, in Java applications are NullPointerExceptions [1]. NPEs are rife at the beginning of a product's life cycle, and they never go away.


The consequence is bug fixing which checks whether:
  • input is null

  • interface calls return null

  • method calls return null

  • properties and collections are null


So this original pizza ordering method:
public void orderPizza(PizzaOrder pizzaOrder){
  for( Pizza pizza : pizzaOrder.getPizzas()){
    kitchen.makePizza(pizza);
  }
}


Morphs into this:
public void orderPizza(PizzaOrder pizzaOrder){
  if( pizzaOrder != null) {
    if( pizzaOrder.getPizzas() != null &&
        !pizzaOrder.getPizzas().isEmpty()) {
      if( kitchen != null) {
        for( Pizza pizza : pizzaOrder.getPizzas()){
          if( pizza != null) {
            kitchen.makePizza(pizza);
          } else {
            throw new PizzeriaTechnicalException("No pizza!!!");
          }
        }
      } else {
        throw new PizzeriaTechnicalException("No kitchen!!!");
      }
    } else {
      throw new PizzeriaTechnicalException("No pizzas in order!!!");
    }
  } else {
    throw new PizzeriaTechnicalException("No pizza order!!!");
  }
}

(Or alternatively many "assert pizzaOrder != null" statements that are useless in production)

Madness, and very messy.

Blinkered coding standards:
This happens when code ownership is not clear, when the desire for clean code is non-existent, and when bugs are fixed with blinkers on.
When developers are time-constrained or scared of changing code.
When development is outsourced but authority and knowledge are not, so refactoring is avoided or not part of the SLA.


Why are we so afraid of NullPointerExceptions?
What is wrong with null?


Null is okay. Null is exactly that. Don't avoid it like the plague, but try not to pass the problem on to others. Tidy up your gateway interfaces, and trust that internally null does not happen. If it does, then don't hide the problem.

Do not check for null inside your application. Check for null at your edges:

Validate user interface inputs. "Users" are always causing trouble, but we cannot live without customers. Validate the essentials, but remember duck walking; don't scare your customers off with perfectionist validation.

Validate external 3rd-party API interfaces. Probably validate internal API interfaces too. Remember duck walking. If an internal system quite close to your application erroneously passes null values, fix that system instead. It is a grey zone, however, so some null pointer validation is fine. Just don't complicate your system because of the laziness of another system within your team/company.

Do not validate calls between your application's layers. Trust that the edges have validated the data. Use unit tests to ensure integrity.

Dealing with NPE/Null?


If you encounter a null, or anything resembling empty when that should not be possible: throw a NullPointerException. Don't sugar-coat and masquerade it as something else. An NPE is an NPE! It does what it says on the tin! It is easily understood by all developers, and easily traced. Obviously don't show it to the end user, but don't catch it too far down your stack either.

If you get a bug report about an NPE, fix the edge validation, the factory generation or the API call itself. Do not add a "== null" condition inside your application.

It is easier to avoid internal NPEs by not passing external DTOs and domain objects through your own application. If you act on an object, it should be one of your application's own objects, not an external one. This avoids complicated null pointer validation.

The "Clean Code" book suggests "special case objects" for method calls that previously might have returned null. I am not a big fan, but it may solve internal API NPEs. I definitely support the pattern of returning empty collections from find* calls instead of null when nothing was found.


How would I deal with Null in the Pizza Order scenario?

PizzaOrder object should be validated beforehand in the interface layer. It is not the responsibility of this method.

The same layer, before this method, should call a factory method that sets the pizzas collection to an empty collection if there are no pizzas.

The kitchen is always injected or constructed externally to this method, and is not its responsibility.

Factories mapping/creating the PizzaOrder before this method call should also ensure no Pizza objects are null.

The one possibly valid check is whether there are any pizzas in the order, i.e. an emptiness check on pizzaOrder.getPizzas(). But that is not a null pointer check, as the preceding interfaces/factories will ensure a collection object exists. And really, the interface or application before this method call should probably have validated the actual order, to ensure it contains pizzas!


So Null is okay and NullPointerException can be your friend.

Tuesday, 16 August 2011

Boobs, breast and Tits. Functional or attractive?

A different post than my normal rambling:

Our beautiful baby daughter was recently born and is in the middle of being breastfed. This constant feeding got me thinking about how I was reacting to the exposure.

Basically, at the moment I think of my partner's boobs only as a utility for feeding our daughter. They have a purely functional property.

It is much the same when I see other mothers feed their babies on park benches etc. Mostly I find it just sweet, or am indifferent to it.

So a body part that is often connected with attraction and excitement is, with the introduction of a baby, perceived completely differently. And I would guess this reaction is similar in most other males, to varying degrees (with probably some exceptions).


But remove the baby and any pregnancy references, and if I catch a glimpse of a breast or cleavage I still snigger like a teenager...


Well at least I got to have "boobs" in a blog title, instead of my normal geeky techy posts :)



Tuesday, 3 May 2011

Ubuntu releases

PS. This is not directly about the latest 11.04 Natty release with the infamous Unity UI. (As of 03.05.2011) I am on 10.04 Lucid on my desktop and 10.10 Maverick on my servers (I know, it should be the other way round...), so I have not even tried Unity yet.

Current release schedule

Ubuntu releases a new version every 6 months, in April and October. They are supported for 18 months. I do not really have an issue with this frequency.

Every 2 years one of these releases is an LTS (Long Term Support) version, which they will support for 3 to 5 years. This is the release to use for production systems.

In practice

In practice this 6-month release schedule means a lot of bleeding-edge software and versions go into every release. This is good. It means the distribution is up to date, and the software is pushed and cannot rest on its laurels and falter.

Most software versions are stable enough, but new departures are often not polished enough and lack extensions, documentation, etc., e.g. Unity, Gnome 3, GDM2, Plymouth. For the initiated, who will research solutions and like dabbling with new software, this is not a big issue. For the vast majority of users, who just want something that works, it is risky and often backfires on their impression of Linux and Ubuntu.

A 6-month release schedule also means quite a frequent upgrade requirement, especially for the uninitiated who are not heavy Linux fanboys (if such a name exists). But you have to draw the line somewhere, and more frequent upgrades do mean a smaller delta and less chance of broken upgrades. On production servers, however, a 6-month upgrade schedule is a non-starter; the LTS schedule is more suitable there.

LTS

Unfortunately the LTS version is not always the version Ubuntu "promotes". After a newer minor version has been released the LTS is put on the back burner. There is, I feel, too much emphasis on promoting, discussing and supporting the newer versions, and not enough backporting to the "stable" LTS version.

Also, when an LTS version is released it is "promoted" as LTS immediately. Being a "major" release, a lot of people upgrade to it soon after the release date, but there are still many teething errors before it becomes more "solid", which is usually when they release an X.x.1 version.



Small modification

They should promote the LTS to everyone, and let the fanboys use the latest non-LTS versions. This way the keen users will still be up to date, which ensures fixes and feature velocity, but the majority of users keep a very stable version.

They should not apply the LTS "brand" to a version until the X.x.1 release a few months after the general X.x release. That way teething errors are never found in an LTS version, and more trust can be placed in the version branding.

They should include more backports to the LTS versions. After about a year my LTSes are a nuisance as their packages are too out of date.

Large modification

The current biennial LTS release schedule is a large gap. What about an LTS every year, or every 18 months?

Have "major", "minor", "tiny", or better "solid", "stable", "unstable" release versioning? Or an unstable release every 4 months (fanboys), a stable one every 8 (desktops) and a solid one (servers, LTS) every 16 months? Too much admin or release confusion perhaps?


While Debian's stable/testing/unstable naming is a close match, Ubuntu has always been reliable. But lately I have been reluctant to upgrade, waiting months until the initial problems are out of the way before I dabble. And now I usually skip a release or two every time.

Summary

Basically, change the promotion of the LTS. Let the LTS be the default version, and do not call it LTS until the Ubuntu X.x.1 teething-problem bugfix version is released.

And optionally change the schedule to more frequent LTS, or even 3 levels of stability/support releases.