Category Archives: software

Bookmark Synchronization on Firefox Part Deux (With Google)

Thanks Google! (Sarcasm alert, sorta)

Wouldn’t you know it, after I spend time getting Foxmarks working on my own server and then documenting it, a friend sends me an e-mail this morning alerting me to the fact that Google Labs just released Google Browser Sync, which will store bookmarks, tabs, cookies, history, etc. in my Google account. I installed it this morning and it’s pretty sweet. I opened a bunch of sites, closed Firefox, then opened it on another computer, and it restored everything from the first machine. It’s actually pretty cool.

The downside…startup time takes a real hit. It syncs everything at startup, and even on a relatively fast connection (our 100 Mbps pipe won’t be active until next week) it took 20+ seconds.

Google encrypts passwords and cookies with a PIN you designate that is separate from your password, so that’s cool. So far it seems to work pretty easily. I’ll try it out and see how I like it compared to Foxmarks.

At least I have a couple of options now for synchronizing bookmarks between computers.

Bookmark Synchronization on Firefox

Those of you out there who work on multiple computers, for example one or more work and personal machines, know how much of a pain it is to keep your bookmarks synchronized. Over the years I’ve built a pretty large library of bookmarks. What makes it annoying is that I’ll add a bookmark on my home computer, for example, then go to work and realize that I need the link that’s on my home computer. What I’ve done in the past is just e-mail my bookmarks.html file to myself, but keeping that synchronized and up to date is just not a lot of fun.

I could use del.icio.us and while I do have an account on there I don’t want it to be my primary bookmark store. I tend to prefer having my bookmarks be private and on my computer.

I figured there had to be a Firefox extension that took care of this; I know I’m not the only one with this problem. I found a few, including OnlineBookmarkManager and Foxmarks. I didn’t get very far into the OBM extension because I found that Foxmarks just uses WebDAV to push and pull bookmark files. Perfect! But where would it store them? I really don’t want to store my bookmark file on a third-party server, especially one that could go down at any time, since both initiatives look more like hobby sites than something attached to a more permanent entity. Plus, there are the privacy issues. Enter Apache with mod_dav and mod_dav_fs.

So, I set out to configure WebDAV support on Apache. At least on my distro on my VPS at RimuHosting (tell ’em I sent you if you sign up!) the Apache configuration comes with the DAV modules loaded, although no directories are configured for DAV support. You’ll want to look for these lines in your httpd.conf file:


LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so

If they’re not there, make sure the shared objects are present and then add the references to them in the file. If you don’t have the referenced files then you’ll need to do more research to get your server configured for WebDAV support.

After this you’ll want to create a directory in your HTML folder. Let’s just say you call it /bookmarks. Create that directory and then make sure that the user that the httpd process runs as has the ability to write to this directory. For me, it was easiest to just give the apache user ownership of that sub-folder.

Next, you’ll need to configure this folder as a DAV enabled folder. You’ll do this by adding a section such as this to your httpd.conf file:


<Location "bookmarks">
DAV on
AuthType Basic
AuthName "WebDAV"
AuthUserFile conf/passwd
Require valid-user
</Location>

In the section above, “DAV On” tells Apache that the mod_dav_fs provider will handle this location. The next lines say that authorization will use basic HTTP authentication, with the passwd file referenced relative to the server root (for me that works out to /etc/httpd/conf/passwd) as the user database. The last line is critical: it tells Apache that a valid user is required to access this location. This keeps the wrong people from reading and overwriting your bookmarks.

The next step is to create the passwd file in your server’s configuration directory. Make sure the user the server runs as can read this file. I used htpasswd with the -c option (something like htpasswd -c /etc/httpd/conf/passwd yourusername, substituting your own path and user name) so that it would create the file and encrypt the password properly.

Now you’re ready to go. Go into Foxmarks and cancel out of the wizard (the Foxmarks menu is in the Bookmarks menu if you’re unable to find it). Set the username and password for the user you are going to use to access the bookmarks. Then go to the other settings tab and put your server name or IP address in the server field. Next, the path should be the location you set above in the Location directive. I went ahead and put my username after the bookmarks directory so that I could partition things out by user if I ever give friends the opportunity to store their bookmarks on my server.

To test it, just click the “Upload Now” button in the Maintenance section. If you get an error message, check the Apache server’s error log (its location is configured in your httpd.conf file). If it says the password mismatched, make sure the server can read the password file you specified. If your password file isn’t where you said it is, the Apache error log will tell you that specifically.

I did encounter a synchronization error after trying it out between two machines, and this is a known error that you can read about here: http://www.foxcloud.com/wiki/Foxmarks:_Error:_Precondition_Failed. There is a procedure at the bottom of that page describing a setting in Firefox’s configuration (type about:config in the address bar) that makes this a non-issue. Read the limitations in the documentation, however. Once I had set that property I had no problems synchronizing between my two machines. If you don’t feel like synchronizing, you can also just always upload or download bookmarks.

I think that should just about do it. Drop me an e-mail or leave a comment if you have problems and I’ll try to do what I can to help.

Mock Objects – How Useful?

I have a feeling the title of this post will have people thinking I’m going to bash the concept, and that’s not really the case. At the same time, I think their usefulness tends to be limited to a relatively narrow set of cases.

Where I Have Used Them

Not in as many places as I thought I would. The main area where I’m using mock objects (and by the way, I’m using EasyMock 2.0) is in code that tests some filters based on Spring’s OncePerRequestFilter. It’s easier to test the code when I mock the HttpServletRequest, HttpServletResponse, ServletContext and FilterConfig interfaces. I script the data these interfaces provide, and the calls the filter is expected to make against them, based on the particular test. For example, I’ll tell the ServletRequest mock to expect a call to getServletPath and give it the response, as well as setting the run-once attribute that OncePerRequestFilter uses. I’ll also set some items on the FilterConfig that the filter will need, and the filter needs to set an attribute on the request as well.
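
To make that concrete, here’s a stripped-down sketch of the style of test I’m describing, using EasyMock 2.0’s static API. RedirectFilter, the paths, and the attribute name are placeholders of mine (and I’ve left the FilterConfig handling out to keep it short); the exact calls you end up scripting depend on what the Spring base class actually does with the request:

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;

import javax.servlet.FilterChain;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.testng.annotations.Test;

public class RedirectFilterTest {

    @Test
    public void redirectsOldPath() throws Exception {
        HttpServletRequest request = createMock(HttpServletRequest.class);
        HttpServletResponse response = createMock(HttpServletResponse.class);
        FilterChain chain = createMock(FilterChain.class);

        // Record the interaction we expect the filter to have.
        expect(request.getAttribute("redirectFilter.FILTERED")).andReturn(null);
        request.setAttribute("redirectFilter.FILTERED", Boolean.TRUE);
        expect(request.getServletPath()).andReturn("/old/page");
        response.sendRedirect("/new/page");

        replay(request);
        replay(response);
        replay(chain);

        new RedirectFilter().doFilter(request, response, chain);

        verify(request);
        verify(response);
        verify(chain);
    }
}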

The reason that mocks come in handy here is that I don’t have to create dummy implementations of these larger interfaces just for testing purposes. Also, the interaction with these interfaces I mock is pretty simple. So basically, they’re great when you have code that interacts with large interfaces but where the interaction is generally limited. More complex interactions would make the mocks, and hence the tests, much more brittle.

Mocks in general work pretty well as long as you’re using interfaces. EasyMock has some support for mocking concrete classes, but I haven’t used that feature so I can’t comment on it. Mocks also have some real limitations, IMHO, that you should be aware of before going too far with them.

Why Don’t I Use Them More?

There are a number of reasons, but the first that always gets me is that I don’t always know how code I didn’t write or maintain is going to call the interface. That means a lot of iterations through test cycles before I know how to fully script the mock. Sure, I could read through the third-party code, but that’s an even bigger pain than the approach I take (at least to me it is). Take my prior example. I had to give the mock to the filter and see where it failed first. If the first call it makes is to the FilterConfig, that fails. Then it gets past that, and I see it fails when it asks for an attribute on the request. And this continues until I get it right. The part that gets me is that I’m trying to test my filter, which in this case is a redirect filter, but I’m really doing a lot of work just to get the code I rely on to the point where I can test my filter’s capabilities. And what happens if the authors make subtle changes to their code while the external behavior stays the same? My tests break without any functionality actually breaking, because the mocked interfaces no longer respond the way the new code expects.

To be fair, if I had created dummy implementations of those interfaces they might not fare a whole lot better, since methods I didn’t think would be called would probably cause the test to fail later on as well. Maybe for common interfaces like the ones above it would be handy to have implementations, built specifically for testing, that I could easily instantiate. In general, using code that you didn’t write, and then having to test your code on top of it, is probably going to be a pain no matter what.

Another reason I don’t use mocks as much as I originally expected is that most of my services are built with testability in mind, so I don’t need to mock a lot of interfaces to test them. I can feed them data and they just work with the test data. Mocks tend to work best when I have to deal with external systems and there aren’t suitable classes available that accept the test data the code needs for my tests. Overall, I’d really rather have my unit tests run through real code where possible, because then I know the interaction works, whereas a mock can introduce a disconnect from the real system in some cases.

I suppose I’m lucky that with my personal projects I have the luxury of building for testability in the first place, so I don’t have cases where I have no choice but to mock a more complicated interface. In my work projects most of my code deals with code generation and persistence, so mocks really aren’t all that necessary there. How are you using mocks? Are there scenarios I don’t deal with where they’ve really simplified your testing life?

Interviewing Job Candidates

I started out writing a bit about mock objects but this topic has been on my mind lately so I’ll cover it first. Plus, my mini rant about the proliferation of long resumes seemed to hit a nerve.

I’m curious how others approach interviewing. Do you go hardcore on the tech questions? Ask specific questions about their experience? Or maybe look for aptitude more than directly relevant experience?

I’ve been trying to hone my approach lately. In some ways my approach has been to give the developer open-ended questions and see whether they answer them or hang themselves, to borrow a phrase. Basically, I start asking questions about decisions they made at previous jobs and on projects they list on their resume. The first thing it tells me is whether they actually worked on the projects I find interesting. You’d be surprised at what I come across. I had one guy spend almost half a page talking about business processes he built, so I asked him how they were configured (i.e., XML or some other metadata) and what tools he looked at to help him solve the problem. He immediately fessed up that they were just hard-coded Java classes and weren’t really workflow. That pretty much sunk his interview, although he wasn’t doing great up to that point anyway. The good candidates tend to answer the questions pretty quickly and don’t go overboard with their descriptions. They show they know their stuff and then tend to ask what other specifics I’d like to know. Bad candidates usually go off on tangents and never get around to answering the question. I don’t stop them, either. I want to see how far they’ll go.

The next step after these types of questions tends to be specific questions about tools they list that I have strong knowledge of. For instance, if they’ve listed Hibernate experience and claim their skill level is a 6 or 7 out of 10, I ask specific questions about the difference between the session cache and the second-level cache. If they know that, I go into questions about how they’ve configured Hibernate, then maybe different ways to model a many-to-many relationship. I like to dig and see if they’ve really used the tool, and then pose what-if questions to see if they understand the tools they use as well.

Last (hey, I usually only have an hour with them) is the coding challenge. We make candidates code. Lately I’ve been using a challenge that asks them to write a random weighted list implementation; it’s basically a weighted round-robin algorithm. The code for it is actually quite simple, maybe 8 real lines if done correctly. The first thing it tells me is whether they can think through a problem and come up with a simple solution. The last thing I want is for them to start inspecting return values and ensuring a specific statistical return percentage based on an item’s weight. That’s just ridiculous. The second thing it tells me is whether they read requirements. I give very specific requirements and in some ways make it easy, if they would just read and think for a minute or two before starting their response. Most don’t. Then I look at the solution. If they do things like make the class immutable to keep the implementation simpler, they get bonus points. Even more if they can coherently explain why they did what they did. Hey, they might have seen the question on the Internet and be good at regurgitating without understanding.
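
For reference, here’s roughly the shape of a solution I’d consider good. This is a sketch of my own, not the actual challenge text, and Item is a stand-in for whatever type the requirements specify; the core of next() is the handful of lines I’m talking about, the rest is scaffolding:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

interface Item {
    int getWeight(); // assumed positive
}

// Pick an item with probability proportional to its weight.
final class WeightedList {

    private final List<Item> items;
    private final int totalWeight;
    private final Random random = new Random();

    WeightedList(List<Item> items) {
        // Defensive copy keeps the class effectively immutable.
        this.items = new ArrayList<Item>(items);
        int total = 0;
        for (Item item : this.items) {
            total += item.getWeight();
        }
        this.totalWeight = total;
    }

    Item next() {
        int roll = random.nextInt(totalWeight);
        for (Item item : items) {
            roll -= item.getWeight();
            if (roll < 0) {
                return item;
            }
        }
        throw new IllegalStateException("weights must be positive");
    }
}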

So anyways, after using this approach for a while I’ve started asking a few more Java-related questions to see if candidates understand the language and programming concepts in general. It seems that folks who know the little things tend to be the developers most interested in their craft. My latest test (and I came up with it while discussing the answer to the first question with a candidate) is to give them a simple Java class that is mutable: it has a default constructor and a getter and setter for an attribute. I then tell them to modify the class to be immutable. No sub-classing or anything like that, just change the code. Again, I get some very strange looks and even stranger responses. It’s kind of boggling that people with a good deal of experience don’t understand the concept of immutability or can’t even describe its benefits.
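
Here’s the shape of the exercise; my actual class differs, but it’s this simple. The before and after are shown together for comparison (they obviously couldn’t live in the same source file):

// Before: a mutable bean.
public class Widget {
    private String name;

    public Widget() { }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// After: the same class reworked to be immutable. The field is final and
// set once in the constructor, there is no setter, and the class is final
// so a subclass can't reintroduce mutability.
public final class Widget {
    private final String name;

    public Widget(String name) {
        this.name = name;
    }

    public String getName() { return name; }
}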

That was longer than I expected. Well, if you have any ideas or tests you like to use, drop me a line and let me know what you do and, more importantly, why you ask and what the answers tell you. That’s more important than the question, in my opinion.

Using Rome to Generate RSS Feeds

I wanted to add RSS feed support to my project and ended up choosing Rome. I did this a while back so I honestly don’t remember the other toolkits I looked at. I do remember posting something and getting feedback pretty quickly from one of the Rome authors so I figured that the project was still pretty active.

It was pretty easy to generate a feed with the toolkit. I know that some people probably use it in more advanced ways or have more complicated needs but I just needed to be able to pass something a list of objects and have it spit out a feed in a particular format. Nothing too fancy.

Here’s a quick rundown of how simple it is to create a feed:

 SyndFeed feed = new SyndFeedImpl();
 feed.setFeedType(format);
 feed.setTitle(title);
 feed.setLink(link);
 feed.setDescription(description);
 
 List<SyndEntry> entries = new ArrayList<SyndEntry>();
 for (Object item : listOfItems) {
     entries.add(entryAdapter.createSyndEntry(item));
 }

 feed.setEntries(entries);

In the code above, items such as format, title, link and so on are passed in to this function. The entryAdapter is also provided; it acts as a translator that moves data from a domain object to a syndication entry (SyndEntry). There is generally one entry adapter implementation for each type of object I make available in an RSS feed.
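
As an illustration, an adapter for a hypothetical BlogPost domain object might look something like this (BlogPost and its getters are made up for the example):

import java.util.Date;

import com.sun.syndication.feed.synd.SyndContent;
import com.sun.syndication.feed.synd.SyndContentImpl;
import com.sun.syndication.feed.synd.SyndEntry;
import com.sun.syndication.feed.synd.SyndEntryImpl;

// Made-up domain object, just for the example.
class BlogPost {
    private String title;
    private String url;
    private String summary;
    private Date createdDate;

    String getTitle() { return title; }
    String getUrl() { return url; }
    String getSummary() { return summary; }
    Date getCreatedDate() { return createdDate; }
}

// Translates a BlogPost into a Rome syndication entry.
public class BlogPostEntryAdapter {

    public SyndEntry createSyndEntry(BlogPost post) {
        SyndEntry entry = new SyndEntryImpl();
        entry.setTitle(post.getTitle());
        entry.setLink(post.getUrl());
        entry.setPublishedDate(post.getCreatedDate());

        SyndContent description = new SyndContentImpl();
        description.setType("text/html");
        description.setValue(post.getSummary());
        entry.setDescription(description);

        return entry;
    }
}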

The next step is to send the feed to an output target. In most cases you would just use the servlet response’s writer. Here’s the code for that; assume the writer instance is also provided.

 SyndFeedOutput output = new SyndFeedOutput();
 try {
     output.output(feed, writer);
 } catch (FeedException e) {
     throw new RuntimeException("Failed to write output", e);
 }

Rome supports multiple formats, including all the RSS versions and Atom 0.3. I’m not sure if I’ll need more of the framework, but it made exporting feeds easy, so it passed my first test. The next step is possibly caching feeds for popular requests so that I don’t have to keep retrieving the data and regenerating the feed.

What’s Up With Huge Resumes?

What’s up with huge resumes these days? The company I work for has been hiring lately, so I usually end up interviewing one or two people a week. Lately I’ve been seeing huge resumes from developers. I mean 9, 10, and 11 pages for guys with maybe 7-9 years of experience. I have 12 years of experience and my resume is at 3 pages or so. It might have crept to 4, but I’m going to start removing jobs more than 10 years back, I think. They’re not relevant and just make the resume go on and on.

Anyways, back to the relevant stuff. These resumes just go on and on (and ON) with mind-numbing detail such as “Configured log4j properties files”. What??? Why would I care about that? Great, you can use log4j; I must want to hire you now. But seriously, they put way too much detail about what they have worked on, and I want to take a nap before I get through two years of experience. I just want bulleted items for the important things they actually did and/or were responsible for. These are also usually the people who didn’t run a spell check on their resume. Hint: “Education” is not spelled with two T’s. What’s worse is that most of these resumes come from recruiters. I actually think it’s the recruiters who tell these people to fluff up their resumes. The recruiters should get dinged for not even proofreading them, or at least lose a few percent of commission. It’s just pathetic.

And last, people, get your resume right! Contrary to the belief of some, Struts is not a methodology. Neither is UML; UML is a modeling language, folks. RUP is a methodology.

Another fun little tidbit: In Microsoft Office, if you type “JBos” instead of “JBoss” it will correct it to “jobs”. Proofread please!

Unit Test Code Coverage With Emma

Code Coverage Tools

While unit testing is a good start, and it’s admirable to try to ensure all your unit tests actually pass, it’s somewhat useless if you don’t know how much, or which parts, of your code base aren’t exercised by those tests. Even the developers most committed to unit testing or TDD will miss covering some portion of their code. How do you prevent that? Enter code coverage tools. The best known is probably Clover from Cenqua, but for those of us working on our own home-grown projects, or who just like to use open source tools, there is Emma.

Works Great!

I’ve been using it for MyThingo and it’s been very helpful. I’ve uncovered a few bugs that I didn’t know existed and that hadn’t been caught by my unit testing up until now. Emma works by instrumenting your code; you then execute your unit tests against the instrumented class files. At runtime you set properties that tell Emma where to keep its coverage database, and after your tests execute you generate the report from that data. It works very well and produces a pretty comprehensive coverage report, including views of all your code with color coding showing which lines have and have not been executed during your tests.

Shortcomings

I have to admit that I miss the spit and polish that Clover has. For example, Clover tells you how many times a line has been executed. While not critical, it’s a nice-to-have, and I would expect that data is available in Emma but just not exposed. If I get some time I may look into the report-generating code and see what’s available. Clover’s Eclipse integration is also a very nice feature, but again, it isn’t critical.

Conclusion

Code coverage is one of those things that isn’t talked about much in unit testing circles. Okay, it is at times, but not many developers I’ve worked with over the years really pay much attention to it. It is a critical part of your testing infrastructure, however. You can’t just accept that your unit tests pass; you must know how much of your code base is actually tested, and then make sure the parts that aren’t tested aren’t critical. I’ll keep harping on this: you have to make it brain-dead simple. It has to be part of your build and not require any special attention from developers. It has to be part of your nightly build so reports stay up to date. And finally, someone actually has to pay attention to the results and encourage developers to increase their coverage scores. If you do all this, I think you’ll definitely see an improvement in the quality of the code delivered to QA and your users. And just think: better code going into QA tends to mean shorter QA cycles, which means faster delivery.

Transferring Eclipse Configurations

One of the pains of working with Eclipse RC releases is moving from one to another. I don’t like using downloaded plug-in packs such as WTP, Mylar, Spring IDE, TestNG, Subclipse, etc. because they get updated pretty frequently, especially to keep up with changes in the RC releases. But I also hated keeping track of all the update site URLs for each of them. Then someone says, “Why don’t you just export your update sites and then import them when you drop in a new Eclipse RC?” Doh! I have no idea how I missed that button before. Sometimes I just don’t look at the UI I’m using.

Using Hypersonic In-Memory for Unit Testing

Persistence Testing

One thing that is always a pain in unit testing is testing functionality that involves persistence. The easy unit tests are the ones that just exercise POJO functionality that doesn’t touch persistence. Then there are the tests you want to write to make sure your persistence configuration is correct, for example, that your associations in Hibernate are configured properly or that changes to the data model don’t break things when you go to save. I’ve made mistakes before where I added a required property but some of my code didn’t set it. It’d be nice to have a unit test tell me that saving the object fails in certain situations.

The difficult part is getting a database configured. No one really wants to have one or more databases that exist just for testing. They’re a pain to maintain and they’re going to be slow. When you unit test, most of the time you want to start with a clean slate and then populate the data necessary for your tests. All of that overhead for a test? Not ideal. What are some options?

Enter Hypersonic

Hypersonic is a Java database that can run either on disk or in memory. The in-memory mode is what we’re interested in for testing because it doesn’t require a location on disk, is extremely fast since it avoids disk access, and is compact.

What I do is have a separate hibernate.cfg.xml file for unit testing that is identical in most regards to my production hibernate.cfg.xml except for the fact that it uses a straight JDBC connection vs. a DataSource reference for its connection source. Additionally, I tell Hibernate to create the database schema upon startup. Here is an example of some of the entries I set:

     <property name="hibernate.connection.url">
          jdbc:hsqldb:mem:testdb
     </property>
     <property name="hibernate.connection.driver_class>
          org.hsqldb.jdbcDriver
     </property>
     <property name="hibernate.dialect>
          org.hibernate.dialect.HSQLDialect
     </property>
     <property name="hibernate.connection.provider.class>
          org.hibernate.connection.DriverManagerConnectionProvider
     </property>
     <property name="hibernate.hbm2ddl.auto>
          create-drop
     </property>

The keys here are the JDBC URL, jdbc:hsqldb:mem:testdb, which tells Hypersonic to run in memory instead of on disk, and the hibernate.hbm2ddl.auto setting of create-drop, which tells Hibernate to generate the schema automatically at startup and drop it at shutdown.
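
With that configuration in place, a persistence test needs no external database at all. Here’s a rough sketch, where hibernate.test.cfg.xml is a test configuration like the one above and Account is a placeholder mapped entity with a Long identifier:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;
import org.testng.annotations.Test;

public class AccountPersistenceTest {

    @Test
    public void savesAndReloadsAccount() {
        // create-drop builds the schema in the in-memory database here.
        SessionFactory factory = new Configuration()
                .configure("hibernate.test.cfg.xml")
                .buildSessionFactory();

        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        Account account = new Account();
        account.setName("test");
        Long id = (Long) session.save(account);
        tx.commit();
        session.close();

        // A fresh session proves the row really made it to the database.
        session = factory.openSession();
        Account loaded = (Account) session.get(Account.class, id);
        assert loaded != null && "test".equals(loaded.getName());
        session.close();
        factory.close();
    }
}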

I also use Spring as my primary application container/framework, so I also keep an applicationContext.env.xml for my normal application database environment and a testApplicationContext.env.xml for testing. The main difference is that the normal one configures the Hibernate SessionFactory with the production Hibernate configuration file, while the test environment file configures it with the test Hibernate configuration. When I run unit tests that need both Spring and Hibernate, I use the test application context in place of the normal one. It’s pretty easy to do, since I have a base class used for testing that builds the application context during startup by listing all of the Spring configuration files and passing them to the appropriate ApplicationContext subclass.
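
The relevant part of that base class boils down to something like this (the file names are examples, and in real life the list is longer):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.testng.annotations.Configuration;

public abstract class SpringHibernateTestBase {

    protected ApplicationContext context;

    @Configuration(beforeTestClass = true)
    public void startContext() {
        // Same files as production, except for the test environment file,
        // which points the SessionFactory at the in-memory database.
        context = new ClassPathXmlApplicationContext(new String[] {
                "applicationContext.xml",
                "testApplicationContext.env.xml"
        });
    }
}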

Give this approach a try if you’ve been struggling with unit testing that involves database operations. It has saved me a lot of time and headache since I started using it. You can send me an e-mail if you have any questions about this.

Next up, code coverage with Emma.

Unit Testing with TestNG

A while back I started working on a project in the evenings that turned into MyThingo. I decided from the get-go to build exhaustive unit tests for the codebase. At the beginning I was using JUnit, but I had read about TestNG a few times so I decided to give it a try. It was an amazingly fast transition. If you want to try it, just follow the documentation that comes with it.

After I started using it the little things that it does better are what kept me using it. Here are a few of the things that I use a lot:

Annotation Based Configuration

Rather than having to use a naming convention or extend a particular TestCase, I can just mark a method as testable. I don’t need all my method names to start with “test”, which gets annoying. Here’s an example of how it looks:

@Test
public void verifyAddition() {
    assert 4 == Number.add(2, 2) : "2 + 2 should have added up to 4";
}

Obviously the unit test above isn’t real, but it shows how you mark a test method and that you don’t have to name it testVerifyAddition. The class this method lives in is just a standard POJO as well.

Another annotation that comes in really handy is ExpectedExceptions. It tells TestNG that the method should fail unless it throws an exception of the specified type. Here’s the difference between testing for a particular exception the normal JUnit way and the TestNG way. First, the JUnit way:

public void testForNullArgumentException() {
    try {
        new Foo().doMethod(null);
        fail("I should have received a NullArgumentException");
    } catch (NullArgumentException e) {
        return;
    }
}

Now the way it’s done in TestNG:

@Test
@ExpectedExceptions(NullArgumentException.class)
public void nullArgInMethodX() {
    new Foo().doMethod(null);
}

A little simpler, eh? None of what I’ve shown is impossible in JUnit (obviously), but overall it just makes testing easier. And the key to getting folks to unit test is making it as painless as possible.

More Flexible Execution Profiles
One of the things about JUnit that can drive a person crazy is that the test class is torn down and re-created for every test, which means you have to set up and tear down around every test method. The idea is that you start with a clean slate for each test, but in the real world that’s not always ideal, nor is it performant. Sometimes I want to initialize some things once for the entire test class and other things for every method. TestNG lets you do this.

You do this with the @Configuration annotation. It can be added to methods, and its attributes specify how the method participates in the test run. For example, the beforeTestMethod attribute tells TestNG the method should run before each test method in the class, while beforeTestClass means it runs once at the beginning of the test run for the class. There are other options you can find in the TestNG documentation, but these are the ones I use the most.
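
A quick sketch of how that looks (the class and method names are mine):

import org.testng.annotations.Configuration;
import org.testng.annotations.Test;

public class AccountServiceTest {

    @Configuration(beforeTestClass = true)
    public void setUpOnce() {
        // Runs once before any test method in this class,
        // e.g. to build an expensive shared fixture.
    }

    @Configuration(beforeTestMethod = true)
    public void setUpEach() {
        // Runs before every test method, for per-test state.
    }

    @Test
    public void somethingWorks() {
        // ...
    }
}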

The last item I thought I’d cover is the ability to tell TestNG that a test method depends on the successful execution of other method(s); if a dependency fails, there’s no point executing the dependent test, which can cut down on test execution time.
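
You declare the dependency right on the @Test annotation; if the named method fails, the dependent test is skipped rather than reported as another failure:

@Test
public void serverStarts() {
    // ...
}

@Test(dependsOnMethods = { "serverStarts" })
public void clientCanConnect() {
    // No point running this if serverStarts() already failed.
}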

One of the issues I’ve had in getting other developers to unit test is that it’s usually just a pain to do anything beyond simple unit tests. You have to face facts: some developers just don’t care about it as much as you do. The key is making it as painless as possible and making it relatively easy to mimic the environment the code is going to run in. I’ve covered some of the areas where TestNG accomplishes the simple/easier/faster part, and in a future post I’ll cover some of the other areas, such as setting up your Spring environment in conjunction with Hibernate using Hypersonic and auto-created schemas in an in-memory database. Another topic will be the use of EasyMock for testing features that would normally require a container such as Tomcat. If you’re interested in those, leave me a comment or shoot me an e-mail and I’ll let you know when they’re up and ready.