Cygwin version of Ruby is currently broken

Apparently, the version of Ruby in the current Cygwin package has a bug that breaks the latest version of Rails.

I decided to play around a little with Rails 2.0 the other day. When I ran the “rails” command to create a new project, I got an error message about /dev/urandom (sorry, I should have written it down). After doing a few Google searches, I discovered it’s apparently a bug that’s been recently fixed in Ruby (see this bug report). Unfortunately, the Cygwin package of Ruby still hasn’t been updated to the fixed version.

If you run Ruby under Cygwin, here’s a workaround until there’s an updated Cygwin Ruby package:

1) uninstall any gems you’ve installed (run “gem list --local”, then do a “gem uninstall” for each one).

2) use the Cygwin setup application to uninstall the Cygwin version of Ruby.

3) while you’re in the Cygwin setup application, make sure you’ve installed the openssl-devel package. If you don’t, Ruby will appear to compile correctly, but it’ll be missing crypto support that’s necessary for the latest versions of Rails (there’s a quick check for this after the list).

4) download the latest stable snapshot release of the Ruby source.

5) untar the Ruby source to the directory of your choosing (I recommend /usr/local/src).

6) follow the instructions in the included README to install.   (./configure; make; make install)

7) download the latest version of RubyGems.

8) untar to the directory of your choosing (I still recommend /usr/local/src).

9) follow the instructions in the included README to install (ruby setup.rb).

10) use gem to reinstall any gems you need, like rails.
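Once everything’s rebuilt, here’s a quick sanity check to confirm that the crypto support from step 3 actually made it in. If openssl-devel was missing when Ruby was compiled, the require below will fail with a LoadError:

    # Sanity check: was Ruby built with OpenSSL support?
    # A LoadError here means openssl-devel was missing at compile time,
    # and the crypto-dependent parts of Rails won't work.
    require 'openssl'
    puts OpenSSL::OPENSSL_VERSION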

It actually takes a lot less time to do than it sounds like it would.  Hopefully there’ll be an updated Cygwin package soon, but until then, this is the best solution.   I did try to use the Ruby One-Click Installer for Windows, but using that under Cygwin is just an enormous pain in the ass.

Rails breakpoints broken?

Arghh… So I decided to do a bit of Rails programming today. While I was trying to figure out a small problem in my program, I decided to fire up the good old Rails breakpointer command only to get this error message:

Breakpoints are not currently working with Ruby 1.8.5

WTF? That’s it? No message about what the recommended alternative is? Gee, guys, that’s professional.

Luckily a few minutes searching on Google led me to a discussion of a reasonable alternative.
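If I remember right, the alternative in question was the ruby-debug gem (treat that as my recollection, not gospel). Install it with “gem install ruby-debug”, then drop a debugger call wherever you would have used breakpoint. A minimal sketch (the checkout method is a made-up example):

    require 'ruby-debug'

    # Hypothetical example method: execution stops at the `debugger`
    # call and drops you into an interactive debugging session.
    def checkout(cart)
      debugger
      cart.total
    end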

Hurray for Google searches! However, even though it all worked out in the end, incidents like these make me wonder whether the people who claim that Ruby and Rails are still too immature might not be clueless Luddites after all.

I won’t even get started on what a useless, outdated, spam-overrun waste of time the Rails Wiki has become….

Google ported Rails to JavaScript?

Apparently Google developer and blogger Steve Yegge has ported Rails to JavaScript. I guess this means we know what language Steve was referring to in his essay “The Next Big Language“. This news also sheds some light on Steve’s rather odd allegorical story “The Old Marshmallow Maze“. It seems clear that the “floating platform” in the marshmallow maze story is his JavaScript version of Rails.

I wonder if Google will release the source to this project? I know I’d love to see it. I like the idea of using the same language for both the client and server side. And JavaScript really isn’t a bad language. Most people’s negative impressions of JavaScript stem from the poor development environment available in the browser and the massive pain-in-the-ass that is the DOM. JavaScript is not the culprit, but it usually gets the blame.

But even if it is released publicly, will JavaScript on Rails take off? I don’t know. There’s been at least one other attempt to create a Rails-like framework in JavaScript (TrimPath Junction), and frankly, I’m not that impressed with the code sample listed on the TrimPath Junction web site. JavaScript just isn’t as concise and readable as Ruby.

Five things you can do to prevent your IT projects from failing.

Here’s some depressing news: the majority of IT projects fail. They either fail outright, come in wildly over budget, finish much later than planned, or don’t deliver the business value that was originally planned.

Anyone who’s been in IT for very long has certainly seen more than a few such “less than perfect” projects. While it’s depressing to see confirmation of just how bad the situation is, it’s also a bit of a relief to know the situation is universal and not just local.

The good news is, it’s not just you, and it’s not just your team. Industry-wide, the percentage of “successful” IT projects is quite low. How low? In 2004, only 29% of US IT projects were successful. That’s according to the 2004 Standish Group Chaos Report, the largest and most comprehensive research report on IT project success in the US. Another 18% were abject failures: they were either canceled or delivered no value. The remaining 53% were considered “challenged”. These were projects that delivered *something*, but were over deadline, over budget, delivered fewer features than planned, or some combination of the three. Other surveys and studies have reported similarly bleak statistics.

So what can be done? How can you prevent your projects from failing? Here are 5 things that can dramatically increase the odds that your IT project will be a success.

1. Keep your project size/duration small:

According to the Standish Group, the three key metrics for predicting the success or failure of an IT project are: project size, project duration, and team size. Simply put, size matters, but not in the way you might think. Big is bad. Small projects are much more likely to succeed. Consider the success probabilities in the chart below:

[Chart: IT Project Success by Project Size]

While I’m still not excited by the slightly-better-than-50/50 odds for projects under $750K, I like them a lot better than the odds if you’re stuck on a $10 million or larger project. You’re literally going to need a miracle for your project to succeed!

Limiting the size and duration of your project is the #1 thing you can do to make your project successful. It’s so important that the next two items in this list are ways to help achieve this goal.

2. Use Agile development methods:

Agile processes are listed as #5 in the Standish Group’s list of Top 10 Reasons for Project Success. Personally, I think they should be higher on the list. If you develop and deliver incrementally, each iteration can be treated as a close-ended “mini-project” that delivers immediate value to the users and enables feedback that can guide the next iteration’s development. This feedback loop is one of several ways that agile development keeps the users involved throughout the project, which incidentally is #1 on the Standish Group’s list of reasons for project success.

Agile development methods also emphasize integrated testing practices, like Test Driven Development (TDD) and Continuous Integration. Integrating testing throughout the development process lets you catch and fix bugs earlier. Under the old waterfall methodology, QA testing and final project integration were among the riskiest parts of an IT project, and were often where the deadline and costs would spiral out of control. Finding bugs and integration problems early means they get fixed when they are least costly to fix. This is similar to the “do it right the first time” rule used in Lean Manufacturing.
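To make the TDD idea concrete, here’s a minimal sketch in Ruby using the standard test/unit library (the Cart class is a made-up example): you write the test first, watch it fail, then write just enough code to make it pass.

    require 'test/unit'

    # Made-up class under test. In TDD you'd write CartTest first,
    # watch it fail, then write Cart to make it pass.
    class Cart
      def initialize
        @prices = []
      end

      def add(price)
        @prices << price
      end

      def total
        @prices.inject(0) { |sum, price| sum + price }
      end
    end

    class CartTest < Test::Unit::TestCase
      def test_total_sums_item_prices
        cart = Cart.new
        cart.add(100)
        cart.add(250)
        assert_equal 350, cart.total
      end
    end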

3. Leverage a Service Oriented Architecture (SOA):

I’ve posted here before about some of the advantages of an SOA. As I stated in that post, SOA enables you to reduce the size of IT projects by allowing you to reuse functionality from other IT systems. For example, if one project has already exposed something like sales tax calculation or credit card processing as a service, future projects won’t have to write code to handle those functions. They can simply call the existing service. Less code to write/test/debug means a smaller project and less risk!
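As a sketch of what that looks like in practice, here’s how a Ruby application might call an existing sales tax service over HTTP. The endpoint and the plain-text response format are hypothetical, just for illustration:

    require 'net/http'
    require 'uri'

    # Hypothetical sales tax service exposed by an earlier project.
    # We reuse it instead of re-writing (and re-testing) tax logic.
    uri = URI.parse('http://services.example.com/tax?state=WI&amount=100.00')
    response = Net::HTTP.get_response(uri)

    # Assume the service replies with the tax amount as plain text.
    tax = response.body.to_f
    puts "Sales tax owed: #{tax}"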

SOA also helps reduce the size of a project by giving you new ways to divide a big project into multiple smaller ones. Often a seemingly large, monolithic application can be re-designed as a number of services plus a much smaller application that consumes them.

Here’s what Werner Vogels, the CTO of Amazon.com, had to say about Amazon’s use of SOA in a recent ACM Queue interview:

We went through a period of serious introspection and concluded that a service-oriented architecture would give us the level of isolation that would allow us to build many software components rapidly and independently. By the way, this was way before service-oriented was a buzzword. ….

If applied, strict service orientation is an excellent technique to achieve isolation; you come to a level of ownership and control that was not seen before. A second lesson is probably that by prohibiting direct database access by clients, you can make scaling and reliability improvements to your service state without involving your clients.

4. Use open standards and commodity software/hardware:

The history of IT has been a steady march of open standards displacing proprietary standards, and inexpensive commodity components displacing expensive proprietary ones. How many SNA or IPX networks have you seen recently? They’ve all been displaced by IP networks, just as proprietary mainframes and minicomputers have mostly been replaced by more open Unix servers, which themselves are now being replaced by Linux servers running on commodity Intel hardware. Remember the old proprietary online services like GEnie and Prodigy? Where are they now? Gone. Wiped out by the open architecture and commodity protocols of the Internet.

Open always wins. Open standards and commodity components are less expensive, more interoperable, more flexible, have greater vendor independence, and are just plain less risky than proprietary alternatives. Next time a slick sales rep tries to convince you to adopt a proprietary standard instead of an open alternative, ask yourself whether you’re about to buy into the next Internet, or the next Prodigy. I know which side I’ll be betting on.

5. Find good people for your team and treat them well:

Programmers, sysadmins, and other technical team members are not interchangeable. This should be common sense, but for some reason people continue to get seduced by the idea that a whiz-bang new product, new language, or new development methodology will allow their project to be successful with a team of low-paid, low-skilled numskulls. It doesn’t work like that, and it never will.

For a project to be successful, you need a team with the appropriate skill levels and you need to provide them with a productive environment to work in. Studies have proven that factors like level of experience, tool familiarity, and motivation exert a massive influence on productivity.

The Standish Group lists “Skilled Resources” as #8 in their reasons for project success, but I would go a bit further and say that having skilled people is necessary for success, but not sufficient. You also have to put them in the right environment, where they can be positively motivated and, most importantly, NOT INTERRUPTED!

That last recommendation might seem strange to a lot of people, but it’s true. Most technical tasks require flow, which is what some people call “being in the zone”. Interruptions like phone calls, meetings, email or noise from adjacent cubicles break concentration and break the flow state. One study showed that developers with quieter workspaces and fewer interruptions were more than two and a half times more productive than other developers!

Interruptions aren’t just a problem for technical workers, either. Another study showed that interruptions from email and instant messaging lowered effective IQ more than smoking marijuana! Clearly this is not conducive to project success!

Building successful IT projects is hard because IT projects are fundamentally unlike any other kind of engineering, manufacturing, or construction project. Projects in the non-IT world often involve building a copy (perhaps with minor changes or tweaks) of something that’s been built before. That almost *never* happens in software! Unlike bridges, cars, or kitchen cabinets, software can be copied an unlimited number of times at the touch of a button, so no one ever needs to build the same software twice.

IT projects are always experimental. That’s why they’re complex and risky. That’s why so many of them fail. That’s why it pays to have the best, most experienced team you can afford. And that’s why it makes sense to keep the experiment (your project) as small and self-contained (loosely coupled) as you can.

Are design patterns a code smell?

Are design patterns a code smell? Are design patterns bad? That’s the rather bold claim made by Stuart Halloway in this post over at Relevance. At first blush, Stuart seems to have a pretty good case. As he points out, most if not all of the design patterns in the classic “Gang of Four” book are tightly tied to a particular family of programming languages: languages similar to Smalltalk, C++, and Java. Many of these patterns supposedly disappear or become trivially simple in more powerful programming languages like Lisp or Scheme (or, I’m assuming, Ruby).

That may be true, but the conclusion drawn, that design patterns are a code smell, is just plain wrong. Design patterns, like any tool, can be misused, and the misuse of a tool is most definitely a code smell. Many of the design patterns in the GOF book probably are inappropriate for use in a functional language like Lisp or Scheme. However, I think an expert in either of those languages could easily write a book about design patterns in functional software design! Think of all the patterns that exist around when and how to use recursion, when and how to use macros, the use of higher-order functions, etc.
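To make the “trivially simple” point concrete, here’s a sketch of the GOF Strategy pattern in Ruby (my own made-up Report example, not one of Stuart’s). Where Java or C++ would need an interface plus one class per strategy, a Ruby block does the whole job:

    # The formatting "strategy" is just a block supplied by the caller,
    # instead of an interface plus one class per strategy.
    class Report
      def initialize(rows)
        @rows = rows
      end

      def render(&formatter)
        @rows.map { |row| formatter.call(row) }.join("\n")
      end
    end

    report = Report.new([%w[ruby 1.8.5], %w[rails 2.0]])
    puts report.render { |row| row.join(", ") }   # comma-separated strategy
    puts report.render { |row| row.join("\t") }   # tab-separated strategy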

C# from a Java Developer’s Perspective v2.0

Here’s a great resource for anyone who ever has to switch back and forth between Java and C#. Dare Obasanjo has written a comprehensive comparison/Rosetta stone for the two languages called C# from a Java Developer’s Perspective v2.0. Even if you’re firmly in one language camp or the other, it’s interesting to see how certain things are done by the “opposition”. Definitely worth bookmarking.

Learning Emacs – part 2: the emacs user interface

In part 1 of this series, we looked at installing emacs. This time, let’s get acquainted with the emacs user interface.

The emacs user interface

Below is a screenshot of an average emacs screen with several noteworthy elements labeled.

[Screenshot: Emacs screen with labeled UI elements]

The first two elements of note are the menubar and toolbar. These are pretty much the same as what you’re used to from any other GUI application, but don’t let that fool you into thinking that emacs works like any other GUI application! The first rule for understanding and learning emacs: Emacs is old. It’s been around since the prehistoric days of computing, and learning to use emacs will sometimes feel like a journey back in time. Gnu Emacs comes from a time before graphical user interfaces were common. Mouse support, menus, and limited graphics capabilities have been grafted onto it over the years, but at its heart, emacs is still a text-mode console application. You have to understand that to ever understand emacs.

Emacs is a relic from another era. You have to accept or reject it on its own terms. It is not, and will probably never be, more than superficially similar to any of the other applications you use. In a later installment, I’ll show you how to disable the menubar and toolbar. I know that sounds crazy, but IMHO it’s essential that you force yourself to learn emacs on its own terms.

The next noteworthy UI element is the minibuffer. It’s the seemingly blank line at the very bottom of the emacs window. The minibuffer is where you’ll enter longer commands, and where you’ll enter arguments to commands that require them. A simple example would be using the C-x C-f command to open a file: the minibuffer area is where you’ll be prompted to enter a filename. (If you don’t know what C-x C-f means yet, don’t worry; we’ll go over that soon.)

The final major UI element we’ll talk about is the modeline. The modeline contains a number of useful pieces of status information, including the name of the current buffer (usually the name of the file being edited in it), the current modes (major mode and minor modes), the current position of the cursor in the buffer, the amount of the buffer displayed on screen, etc.

That’s it for this installment. Next time we’ll start getting hands-on with some basic commands, including cursor movement, insertion and deletion, etc.

______
Other installments in this series: part one, part two, part three, and part four.

Why does outsourcing work?

There’s an interesting article in the MIT Technology Review about Charles Simonyi. Simonyi, a former programmer and software architect at Microsoft and at Xerox’s fabled PARC labs, is best known (or at least most infamously known) as the creator of Hungarian notation. This particular paragraph in the article caught my attention:

This was the backdrop for Simonyi’s 1977 dissertation, “Meta-Programming: A Software Production Method.” Simonyi proposed a new approach to “optimizing productivity,” in which one lead programmer, or “meta-programmer,” designed a product and defined all its terms, then handed off a blueprint to “technicians,” worker-bee programmers who would do the implementation.

To me, that sounds very much like the process of outsourcing or offshoring. In most companies that outsource their programming, an application architect or team of architects will create a massively detailed specification (the “blueprint”) that is handed off to the “worker bee” programmers (often offshore) at the outsourcing company.

Finding out that the basic workflow for outsourcing software development is 30 years old is certainly an interesting bit of trivia, but in my opinion the more interesting question is: why do we need the “worker bees” at all?

I’ve read someone (I can’t remember now who) say that the problem with the Big Up Front Design model of programming is that the spec and the code are both composed of the same raw material (text). This is radically different from building a house or a bridge, where the spec is much more malleable than the eventual building material.

Why then do we continue writing specs? Why do we continue to turn these specs over to armies of overseas programmers? The specs, architectural documents, and requirements documents for most corporate software projects are mind-numbingly detailed. Why do we need the worker bees?

I believe it’s because the current programming tools and languages don’t offer a high enough level of abstraction. If they did, then the difference between writing a detailed BUFD spec and writing the actual application would be relatively trivial. In Simonyi’s model, and to some degree in the model used by every company that outsources its development, all of the important details are worked out and specified by the “meta programmer”. So what’s left for the “worker bees”? Unimportant details? Fleshing out the implementation of the important details?

If fleshing out the implementation is an unskilled, mechanical task, shouldn’t we be letting the computer do that? Isn’t that what computers are for?
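Here’s a hedged Ruby sketch of what I mean: a few declarative lines (the “spec”) generate their own plumbing at runtime, so there’s no worker-bee implementation step left. The Model base class and the Invoice example are hypothetical, just for illustration:

    # A tiny declarative "spec" that writes its own implementation.
    # Each `field` line generates a reader and a validating writer,
    # so the machine does the mechanical fleshing-out for us.
    class Model
      def self.field(name, opts = {})
        define_method(name) { instance_variable_get("@#{name}") }
        define_method("#{name}=") do |value|
          if opts[:required] && value.nil?
            raise ArgumentError, "#{name} is required"
          end
          instance_variable_set("@#{name}", value)
        end
      end
    end

    # Hypothetical example: two lines of spec, no hand-written plumbing.
    class Invoice < Model
      field :customer, :required => true
      field :amount
    end

    invoice = Invoice.new
    invoice.customer = "Acme Corp"
    invoice.amount = 100.00
    puts invoice.customer  # => "Acme Corp"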

Outsourcing, particularly offshoring, is a temporary aberration. A symptom of the stagnation of programming tools and languages.

Learning Emacs – part 1: Introduction, entering emacs, and exiting emacs

So I’ve decided to learn Emacs. I’ve always been more of a Vi man, and I still consider a moderate amount of comfort with Vi to be a necessity for any Unix sysadmin. I’ve also used a few of the major IDEs for programming: I use Eclipse with RadRails for all of my Ruby development, a friend who’s trying to get me to learn Java has introduced me to NetBeans, and I’ve played a bit with C# in Visual Studio.

So why learn Emacs? Well, there are so many people on the net who swear by Emacs, including some who otherwise seem to be very sensible and non-masochistic. 🙂 It’s also supposed to be the environment of choice for Lisp and Scheme programming, which is something I’d like to learn more about. So, operating under the principle that “where there’s smoke there may be fire”, I’m going to give Emacs a chance and see if I can learn enough of it to at least stand it. 🙂

I’m starting by installing Gnu Emacs under Cygwin. I won’t go into details on how to install Cygwin, as I think that’s covered quite thoroughly elsewhere (specifically in the Cygwin documentation), although I may do a later post on how I’ve customized my Cygwin environment to make it more comfortable and more usable. What I am going to do is document my experiences learning Emacs, and what I’ve learned.

Under Cygwin, emacs is installed in the usr\bin directory under the cygwin directory (on my system “c:\cygwin\usr\bin”).

To start emacs from a shell prompt, just type “emacs” or you can type “emacs filename.txt” to open a file (in this case, called filename.txt). In a later installment of this series, we’ll look at how to open files from within an already started emacs.

To exit emacs use “C-x C-c”, which means hold down the “Control” key while hitting the “x” key, then hold down the “Control” key while hitting the “c” key.

Under Cygwin, there’s also a Start Menu shortcut conveniently created by the Cygwin setup program. Or if you’re really smart, you can use Launchy (like I do). 🙂

If you’ve got X installed (which you should), you’ll get a graphical emacs like this:

[Screenshot: Emacs running under X]

If you’re not running X, you’ll get a slightly less friendly looking version of Emacs, like this:

[Screenshot: Emacs in text mode]

There’s also a native Win32 Emacs version.

From what I remember of being a Mac user, you can get Gnu Emacs from Fink, or there are some native Aqua ports of Emacs.

If you’re a Linux user then it should of course be easy to get a copy of Gnu Emacs from your preferred Linux distribution. 🙂

Next time, we’ll talk about actually using the damn thing. 🙂

______
Other installments in this series: part one, part two, part three, and part four.