
Book Review: The Rational Optimist

October 17th, 2011


The title attracted me to borrow The Rational Optimist from the library. Rational and optimistic are characteristics for which I strive. I also picked up the book hoping to find positive actions to address great challenges. I read it, was disturbed and disappointed.

The author suggests that all generations are presented with potentially cataclysmic risks, and that those risks are either overstated or overcome. This disturbed my sleep for a couple of nights as it forced introspection. Was I abandoning my normal optimism? Have I succumbed to a line of thinking that is not rational?

No.

I turned to the back book jacket looking for credentials, and came up short. I dove into the reference notes, finding them to be thin, repetitive and lacking in peer-reviewed science. This book is an op-ed piece.

The themes presented are that prosperity solves our problems and that many problems are framed too pessimistically. Or stated like the pop song title, Don’t Worry, Be Happy. Problem is, the book is too hollow to substantiate don’t-worry-be-happy as a rational position in the face of great risks.

Anyone reading this far and being familiar with dynamic systems may appreciate that the author and publisher have time constants on their side. Consequences of the potential events the book explores will play out in a time frame much longer than the 15 minutes of attention the book may enjoy.

So I close with one of my common themes: Don’t out-source your thinking.

Book Review: China’s Megatrends: The 8 Pillars of a New Society

February 23rd, 2010


China's Megatrends: The Eight Pillars Of A New Society
John Naisbitt; Harper Business 2010

Fundamental, lucid and much more rational observations of the change, progress and realities of China than you can get from the media or politicians. One of the best explanations about China’s self interests, constraints and self-perceptions as it grows dramatically.

I especially appreciate how John and Doris Naisbitt describe the legitimacy of the government as performance-based. Makes great sense. More akin to a corporate management structure than outdated perceptions of what we think we know. Also eerily parallels influence of large corporations on society and culture in the West. Makes me want to get on a plane, set up a business and find out for myself.

I look forward to reading Peter Navarro’s The Coming China Wars and seeing if the ideas fit the framework China’s Megatrends lays out.

Book Review: Total Engagement: How Games and Virtual Worlds Will Change the Way We Work

January 21st, 2010


Games At Work: How Games And Virtual Worlds Are Changing The Way People Work And Businesses Compete
Byron Reeves; Harvard Business Press 2009

This is a profound set of ideas, although they are hard to describe. I liken the book to looking at the early maps the European explorers generated of North America. There is something valuable here, but the resolution is low at this early stage and we do not have enough experience. Still, the ideas are spot on.

I plan to return to Total Engagement for a second read in a few months, and see how the temporal impressions change. I suspect we will be referring to this one for some time to come.

I met Byron Reeves a couple of years ago at a conference, and follow his work. You should too. He is a professor at my alma mater, Stanford.

Hats off to my local library for ordering a copy for their collection after my inquiry about the title. If you are not engaging with the incredible new services your library is offering, you are missing out.

Climate Scenario Planning: BuzzKill for Dinner Guests

December 11th, 2009


I was privileged to attend Ben duPont’s Non-Obvious dinner last night.  Ben assembles a wonderful constellation of brains from academia, business, government and media, and poses the challenge:  Postulate a scenario at least 5 years in the future and give us the implications, with emphasis on the non-obvious.  Fascinating stuff.

Attending for the second time, I always leave the event with enthusiasm and hope for our future.  Those that know me would agree I am an optimistic person.  And so I caution you that the scenario I presented last night, and share here, is uncharacteristic and dark for me.  It is a scenario sketch intended to stimulate action and prosperity, not to drive people into their bunkers.  It is not a prediction.  That said, you may not want to serve it to your dinner guests.

Here goes:

We Fail to Address the Cause of Climate Change Before a Major Upset, and Have to Shift to Mitigating Temperature

Greenhouse gas inventories and trajectories will awaken one of the so-called ‘sleeping giants’ (Prof. Laurence Smith, UCLA):

  • Disruption of the Atlantic thermohaline circulation
  • Melting of Greenland ice sheet
  • Un-sticking of the frozen West Antarctic ice sheet from its anchor
  • Disruption of Indian Monsoon
  • Disruption of El Niño patterns
  • Release of methane trapped in permafrost
  • Rapid die-back of Amazon forests.

Greenhouse gas mitigation arguments are abandoned in favor of geo-engineering projects instead, shifting from prevention of cause to mitigation of effect (temperature), e.g.:

  • Manufacture of bright marine clouds or marine cloud reflectivity enhancement
  • Dispersal of sulfur dioxide in upper atmosphere mimicking Mt. Pinatubo effect (20 million tons SO2 resulting in -0.5 deg. C. effect)
  • Space shield.

We have a global economy, but no global governance to address inequities and unintended consequences.  IF the funding for geo-engineering at global scale is assembled, the project organization will emerge as a global, quasi-sovereign entity.

National governments, including the U.S., having failed to address the run-up, will fail.  After all, in the U.S. we are conditioning the population to blame any incumbents for disaster – deserved or not. [Note: I grew up on the coast of the Gulf of Mexico and experienced many hurricanes.  I was surprised to learn Katrina was, listening to media anyway, 'the government's fault.']  Regional governments will emerge and dominate as local populations band together to garner scarce resources and defend against climate refugees.

Meanwhile: Carbon Management will Force Redesign of Supply Chains

We may get carbon management legislation.  More likely, our industrial leadership will rise to the challenge of mitigating our fossil fuel appetite.  After all, this is too important to be left to our political process.

When we do, managers and stockholders will find themselves generally unprepared for, and unaware of, the carbon distribution across their supply chains. Our current supply chains are optimized for low cost, without the cost of carbon (and many other externalities) associated with burning fossil fuels.

Managers will catch on:  if you own it and it burns fossil fuels, YOU are responsible.

This insight will generate:

  • Massive de-construction and re-construction in different configurations of our supply chains
  • Divestitures and many new business entities
  • Lots of arbitrage in real, fuel burning assets
  • Trade in allowances and credits as everyone now expects.

Smart people are needed to do this work.

Sharp Greenhouse Gas Accounting

June 27th, 2009


…or perhaps Sharp Greenhouse Gas Accounting with Fuzzy Measurements

I am deep into greenhouse gas (GHG) accounting and reporting requirements just now, including ISO 14064, 14065 and training at the Greenhouse Gas Institute. This is a partial explanation for the period of silence on this blog. Some readers might question whether GHG accounting topics fit on Crustybytes.com.  It turns out our perceptions of GHG are information-based, and despite the fact that each of us is responsible for producing tons of GHG each year, we generally don’t see or sense them.  As I write this post, the U.S. House of Representatives has just passed the American Clean Energy and Security Act, which includes Cap & Trade for carbon/CO2 equivalents (CO2e) and will have considerable impacts on business.  Finally, each of us has to decide our position on climate change and what we are willing to do about it.  All of which is to say the subject fits the Tech, Biz and Open Source Brains meme for Crustybytes.

I am particularly interested in calculating CO2e for stationary combustion sources for numerous reasons:  they are one of the largest man-made contributors to atmospheric carbon, they are relatively well-instrumented for good measurements and therefore insight, and they are well understood.  So when a training exercise presented the opportunity to compute CO2e emissions for a fictitious company, I got the chance to do what I enjoy — take the apparatus apart in order to understand it better.

My challenge is to understand the confidence we can expect for calculated CO2e emissions.  These results are or soon will be the basis for complying with regulations, and may become the basis for observing caps (limits) and the buying and selling of offsets and credits.

For my purposes, I narrowed my inquiry of the exercise to stationary emission sources that burn natural gas.  I will not try to go into the detail of the calculations here, but characterize them as relatively straightforward and modestly rigorous.  Understanding accuracy or confidence is a matter of understanding the variability (the ‘fuzziness’ in the subtitle) that can arise from all the bits that make up the answer — calculated CO2e emissions.  The ‘bits’ as I call them, fall into three categories: field measurements or records; constants and conversion factors; and Global Warming Potentials (GWP).

I frame this discussion to be about uncertainty, rather than the other side of the coin, accuracy. I quantify uncertainty as the amount added to or subtracted from a stated value to define a range within which a result can be expected to fall 95% of the time.  For statistically normal distributions of samples, this is approximately 2 standard deviations.

Sources of Uncertainty

The constants and conversion factors involved are generally derived from first principles and can be accepted as fixed, not subject to uncertainty (without redefining our understanding of chemistry). Fuel gas measurement, fuel gas composition and percent combustion are subject to uncertainty, and factors are assigned consistent with typical measurement technologies, fuel system variabilities and equipment performance, respectively.  Uncertainty is always present in these numbers, though we are adept at ignoring it.  We are particularly prone to accepting numbers verbatim from computers and from instruments we do not understand.

GWP values are highly uncertain, with reported uncertainties as high as 35%. However, by convention GHG organizations have agreed to use the same values globally — so we keep them fixed.  No need for outrage; there are precedents.  We do it all the time with the electric meter or the gas pump — we ignore uncertainty and use the measurement as the basis for a business transaction.

Method

Here’s the approach:

  • randomly assign a value to each of the uncertain variables (fuel gas measurement, composition, combustion fraction) within the range elected (+/- 2 standard deviations)
  • record the result
  • repeat the above a thousand times or so (10,000, or maybe a million, as shown below)
  • analyze the set of results: mean, standard deviation and, for me, a graph works best (a minimal Octave sketch follows).
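
Here is what that loop might look like in Octave. The nominal fuel volume, heat content, emission factor and uncertainty ranges are illustrative assumptions of mine, not the figures from the training exercise, and I leave out the small CH4 and N2O terms, which would be perturbed the same way and multiplied by their fixed GWPs.

  % Minimal sketch of the Monte Carlo loop described above.
  % All numbers are illustrative assumptions, not the exercise values.
  N = 100000;                      % number of simulation trials

  fuel_scf  = 50e6;                % annual fuel gas volume, standard cubic feet (assumed)
  hhv       = 1.026e-3;            % heat content, MMBtu per scf (typical pipeline gas)
  ef_co2    = 53.06;               % kg CO2 per MMBtu (representative factor)
  comb_frac = 0.995;               % fraction of carbon oxidized

  u_fuel = 0.01;                   % 1 sigma metering uncertainty (assumed)
  u_hhv  = 0.02;                   % 1 sigma composition variability (assumed)
  u_comb = 0.005;                  % 1 sigma combustion performance (assumed)

  results = zeros(N, 1);
  for i = 1:N
    f = fuel_scf  * (1 + u_fuel * randn());   % perturb each uncertain input
    h = hhv       * (1 + u_hhv  * randn());
    c = comb_frac * (1 + u_comb * randn());
    results(i) = f * h * ef_co2 * c / 1000;   % metric tons CO2 per year
  end

  printf("mean    = %.0f t/yr\n", mean(results));
  printf("1 sigma = %.0f t/yr\n", std(results));
  hist(results, 50);               % the histogram shown below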

Histogram of CO2e Emissions Calculations with Uncertainty

Implications

Take a single sample at any given hour in a year (8760 hours in a year), and expect the result to fall somewhere on the curve.  Understanding the curve, and the probability of landing on a specific location on it, gives quantitative confidence to regulatory reporting numbers or to sizing the margin under a cap that might be sold to others.  We do not yet have a price for a ton of CO2e in the U.S., but this week’s European price is approximately $18.84 U.S., making 1 standard deviation on the graph above worth ~$14,526 U.S. per year. So sharp greenhouse gas accounting may pay.     db

Probabilities and Simulation: Shopping in Abundance

March 17th, 2009


Titling this post was a challenge: The Engineer and the Mayonnaise was tempting, but I was reluctant, thinking about what Googlers might be expecting if the search engines brought them here.  No matter.  The nameplate says “Tech, Biz and Open Source Brains,” and we will get around to all three here.

This post is about abundance, which is a persistent theme with regard to information technologies here at CrustyBytes.  This morning I shopped for the annual purchase of mayonnaise.  My store had 38 distinct choices of mayonnaise.  When I counted, several shoppers steered wide to put a more comfortable distance between themselves and me.  I marvel — what an incredible abundance and what a marvelous supply chain that can deliver 38 choices for that once-a-year moment when I will buy the stuff.

Human brains are wired to expect scarcity, or more accurately, to prefer behaviors that would improve our chances of survival in the face of scarcity.  So confrontation with 38 choices of mayonnaise is not something we are tuned for, evolutionarily speaking.  Choose poorly and I face a year less than delighted with my choice of mayonnaise.  I lived 4 years in Europe, where my neighborhood store and supply chain were tuned to bicycle and pedestrian patrons, and offered 2 choices of mayonnaise.  I vividly recall repatriating to the U.S., where the supply chain is tuned to the SUV-equipped shopper, and feeling absolutely paralyzed at the sight of the choices presented — mayonnaise and everything else.

What to do?  3 choices come to mind:

  1. If I had a therapist, I could ask her for coping skills for the anxiety.  After all, I will be living with the choice for a year and what if I get it wrong? Think of the buyer’s remorse.  On the other hand, thinking one needs to talk to a therapist about mayonnaise anxiety is a sure sign of needing a therapist, and my budget is not in shape for that level of recursion.
  2. I could model my dilemma in the form of an equation, and solve for it.  The years have de-tuned my write-down-and-solve-the-equation skills, but it could be done.
  3. I could take some of the ubiquitous, free computing I am always talking about and run a few hundred thousand simulations in search of mayonnaise choice happiness.    …     Yep.  That’s my ticket.

Here’s how I modeled it.  First, apologies to my friends at P&G who know so much about this stuff that I am sure they know what I will pick, and why, before I stop the shopping cart.

  1. 38 choices.
  2. Despite my refined palate, I would be happy with some number of them.
  3. I try a small number of them, and record the one I like best.
  4. The question becomes: how many do I need to sample to be reasonably sure I get one that qualifies as one of my winners? (A rough simulation sketch follows.)
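
Here is a rough Octave sketch of how I read that model: shuffle the shelf, try the first few jars, and count how often at least one of a handful of ‘winner’ jars lands in the sample. Treat it as an illustration of the technique; the exact model behind the plot below, and the numbers it produced, may differ.

  % Rough sketch of the mayonnaise sampling model -- an illustration of
  % the Monte Carlo technique, not necessarily the model behind the plot.
  n_choices = 38;          % jars on the shelf
  n_winners = 3;           % jars I would accept as "best" (assumed)
  n_trials  = 100000;      % simulations per sample size

  for k = 1:12                          % how many jars I actually try
    hits = 0;
    for t = 1:n_trials
      p = randperm(n_choices);          % shuffle the shelf
      sample = p(1:k);                  % the k jars I try
      if any(sample <= n_winners)       % did a winner make the sample?
        hits = hits + 1;
      end
    end
    printf("try %2d jars -> P(winner sampled) = %.2f\n", k, hits / n_trials);
  end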

Here’s what the results look like for the case where I imagine there are 3 which I would accept as best.

The results show I have to try at least 4 before I get a better than 50:50 chance of selecting one of the three I imagined as best for me.  Beyond about 9 tries, the marginal improvement diminishes smoothly.  The logic is well explained in Digital Dice, by Paul J. Nahin, in the context of dating and marriage. The stakes for mayo are much lower.

The simulation was done using Octave, an open source math package available free under the GNU General Public License. Powerful, elegant and free — yet another amazing example of the abundant information technology available to everyone.  In fact, here are the components I used:

  1. Octave for a simple program to run 10,000s of simulations;  running thousands now, while writing this post, with no noticeable degradation in my computer’s performance for my typing.  In this sense, the computing processor is free for solving these simulations.
  2. jhandles, Octave’s java-based plotting package, for generating wonderful plots.  Because it just works, you and I do not need to know much about it, other than to acknowledge the fine work (Michael Goffioul) and powerful results. I subscribe to the picture-worth-a-thousand-words school of thought.
  3. Picnik to edit/crop the plot output into a pleasing form.  Free and no registration required.

Here’s the real headline: the $2.34 I spent on the jar of mayonnaise is more than I spent on the vast computing and simulation capability at my fingertips to explore the whimsical dilemma of making a shopping choice in the face of abundance.  What’s more, I had to drive to the store to buy the mayo.  The simulation tools and the documentation came to me over the internet.

We live in an amazing time of computing abundance, and have hardly begun to realize the implications or possibilities.  db

Book Review: Digital Dice: Computational Solutions to Practical Probability Problems

March 17th, 2009


Digital Dice: Computational Solutions To Practical Probability Problems
Paul J. Nahin; Princeton University Press 2008

I was blown away years ago when I first learned of Monte Carlo simulations and how they can be applied to big, serious problems. Reading Paul Nahin’s book, I realized the beauty of applying these techniques to everyday problems. Then I discovered Octave, the open source numerical programming language and alternative to Matlab. Now it became really fun.

My whimsical example from personal shopping is described on another post at CrustyBytes.com.

A Fair Trade for Email: You Decide.

February 25th, 2009


A persistent theme in these writings is the abundance of computing power available for free, or almost free.  In the Featherlight series at ExecutiveEngines, we are illustrating how much critical IT horsepower can be deployed in a startup for much less than the daily cost of a fancy coffee, a latte grande. These ideas are forged from the real world, as I have had the privilege of setting up resources for a number of startups and enterprising individuals.  One of the free apps is the standard edition of Google Apps (for your domain), including email.  Google offers a premium edition of Google Apps for currently $50/user/year, which is a bargain — but that is another discussion.

The very substantial email capability Google provides for free comes via a value exchange, a term we have covered before.  In return for the email, Google gets to place the sponsored links that have become so ubiquitous with search: on the right side of the web view of mail, or as a short text string at the top of the inbox.  I get an abundance of reactions to this.

The issue of email privacy is the one I want to explore here.  Anecdotally, the vast majority of the people I set this email up for don’t seem to think about it — perhaps because we have become so accustomed to the aforementioned sponsored links that appear with our routine search results.  When I explain how it works, I see a few individuals become concerned, or even alarmed.  Here’s the deal:  the links are generated by processing the message contents for keywords that advertisers buy, brokered by Google.  So the machines are looking at your message, and serving up links that might be relevant in hopes that you might click one.  The part that raises concerns is the fact that other parties, silicon-based or otherwise, are reading the mail.

There is no privacy … not sure there ever was.

The discomfort arises from the notion that what you, person A, say to person B on email is thought to be private and confidential.  If you are thinking about an email at work, game over.  Chances are, the email and the contents are the company’s, not yours.

Perhaps this expectation of privacy stems from our experience with the postal service, where we inspect the condition of the envelope upon arrival, and if in good condition, we presume it has not been read by others.  Anti-tampering laws may give some comfort that law-abiding people and companies are not reading our mail.  Of course, we often throw sensitive correspondence in the trash, where it can be picked and read by motivated miscreants.

Or perhaps we inherit the notion of privacy and confidentiality from our use of the telephone.  We expect that person A and person B, if each is in a private space, can conduct a conversation in private — unless a judge has issued a court order authorizing a tap.

Both cases, postal and telephone, seem to be some comfortable status quo where we prefer not to think how brittle our privacy may be.

Your glass house

What if I told you:

  • how much you paid for your house, and the size of your mortgage, are just a few free clicks away for anyone who wants to know.  Makes you wonder why there is still a taboo against discussing it at cocktail parties.
  • data about items you buy, including those which are known to be hazardous to your health and insurability, for how much and when, is sold to third parties you do not know.  Your affinity or loyalty card at the grocery store secured your permission to do so, often in exchange for a few cents off on items you could have bought for the same price at the discount store.
  • that you carry a microphone in your pocket, that can be remotely turned on to listen to any conversations in the room.  Again, a court order is required, but it is done.
  • that the time and date your car’s transponder — you know, that EZ Tag on the windshield — passed a reader is logged and available.  My local turnpike authority was embarrassed when a simple hack was published that allowed anyone to query their system for the logs of the account of anyone else; the easy security hole was quickly closed, but they still keep your data.

All of which is to say, many aspects of our lives are not as private as we imagine, and in many cases we willingly trade our privacy for convenience or a few cents.  This is like the old joke:  ‘we have already established the vice, all we are doing now is negotiating the price.’

Your best defense

So what do you do about it?

Accept the trade, unless you have a lot of money to waste on a private email server and its maintenance.

More important, do no harm in email. Email lasts forever.  Imagine your emails being read aloud in public by your worst enemy’s attorney, to your parents, grandparents and children. Sobering image.

The reckoning

It is the inconsistency that screams out.  We volunteer privacy compromises daily for pennies or convenience, then refuse free, enterprise-class email because Google wants its machines to process our message contents for words that might trigger the ad-words they sold? Think it through.  db

Dark Clouds and the Economics of SPAM

February 2nd, 2009


Since publishing the post on Realizing the Bounty of Free Computing, I have been socializing the concept with many people, including business execs, venture capitalists, scientists, and a host of other smart people I have the privilege to know and who have tolerance for my ideas.  And while they are uniformly kind, it is clear that the phenomenon and the potential are not yet real to them. This week a couple of items crossed my radar screen that may reveal some insight.

Dark clouds

I heard Phil Windley and Scott Lemon interview André M. DiMino of the Shadowserver Foundation on Phil’s Technometria podcast at IT Conversations. André describes the organization’s efforts to track bots and botnets across the internet.  These are compromised computers, servers numbering in the thousands and personal computers numbering in the neighborhood of 750,000 at this writing, infected with nefarious software, aka malware, that can be tasked to do the bidding of the botnet commander.  Botnets are well explained at Shadowserver.

Make no mistake, this is an example of utility computing.  Except the owners of these computers have been duped into letting their machines be conscripted to do the deeds of the commanders.  These deeds are often illegal, like SPAMming.  From previous posts, you learned I am no fan of the label Cloud Computing.  If we did apply the cloud metaphor, this use would be the dark side.

Also make no mistake, the people who create these bot networks are clever and resourceful.  And while I have argued that abundant computing is almost free, it is really free for these people.  Isn’t it interesting that the dark side provides early adopters and innovative exploits of this new resource?

Economics of SPAM

Phil Windley also mentioned a fascinating article by BBC, Study Shows How Spammers Cash In on SPAM.  The article reports a study conducted by a group of researchers at the University of California, San Diego, led by Stefan Savage.  They hijacked a subset of a bot network (yes, there is no honor…), inserted their harmless SPAM directing customers to a fake pharmacy site appearing to sell a herbal supplement for libido enhancement, and counted.  Here is the summary:

  • 75,869 computers were hijacked
  • over the course of 26 days, 350 million SPAM emails were relayed
  • 28 user clicks resulted that would have been sales had the harmless website processed them (it didn’t), for a total of $2,732, or an average order of about $98.

The paper describing the study is here.

28 sales per 350 million emails is a yield of 0.000008% or 1 per 12.5 million emails. $105 revenue per day. Use of 75,869 computers for $2732, or the use of more than 27 computers per $1 revenue. Remember the researchers only hijacked a subset of the bot network, which they estimated to be 1.5% of the total.  The researchers estimate the full network would yield $7000 revenue per day.
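
The arithmetic is simple enough to check in a few lines of Octave, using the figures summarized above:

  % Back-of-the-envelope check of the SPAM economics above
  emails  = 350e6;     % SPAM messages relayed over the campaign
  days    = 26;        % duration of the hijacked campaign
  sales   = 28;        % would-be purchases
  revenue = 2732;      % would-be revenue, USD
  bots    = 75869;     % hijacked computers used
  share   = 0.015;     % estimated fraction of the full botnet hijacked

  printf("yield           : 1 sale per %.1f million emails\n", emails / sales / 1e6);
  printf("response rate   : %.6f%%\n", 100 * sales / emails);
  printf("revenue per day : $%.0f\n", revenue / days);
  printf("computers per $ : %.1f\n", bots / revenue);
  printf("full-botnet est.: $%.0f per day\n", (revenue / days) / share);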

Pause and reflect on how much free resource the botnet commanders exploited for modest gain.

Post office sees less mail volume, in part due to the internet

Also this week, the Postal Service reported a reduction in mail volume, attributed in part to what Postmaster General John E. Potter explains this way: “a revolution in the way people communicate has structurally changed the way America uses the mail.”  The shift is offered as part of the rationale for reducing some mail delivery from 6 days to 5 days per week.

This news, arriving the same week as the SPAM economics above, prompted me to do some back-of-the-envelope calculations comparing the economics of a direct mail campaign versus a botnet SPAM relay campaign.  Granted, I am not including the case of a legitimate (!?) bulk email campaign.

Turns out I can’t responsibly make the comparison.  Sure, I ran the calculations but there is no rational market for a 350 million piece direct mail campaign.  Some things do jump out.

Asymmetric Costs

The cost of the message payload in the case of direct mail, the printed envelope and contents, has approached some low-cost asymptote from years of cost pressures, but is still probably in the neighborhood of $0.22.  If 350 million pieces were rational to send, the payload cost would be in the tens of millions.  The weight at 0.5 ounces per piece would imply well over 5000 tons.  Of course, the cost of the payload in the case of email is nil.

Also jumping out is the cost of transport, effectively zero in the case of SPAM.  According to the calculator at the postal service website, we can expect another $0.22 for postage per piece of direct mail.  Again, tens of millions in expense.
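
For comparison, here is the same back-of-the-envelope treatment for a hypothetical direct mail campaign at SPAM scale, using the rough $0.22 payload, $0.22 postage and half-ounce figures above:

  % Hypothetical 350-million-piece direct mail campaign -- illustrative only
  pieces       = 350e6;   % same volume as the SPAM relay above
  payload_cost = 0.22;    % printed envelope and contents, USD per piece
  postage_cost = 0.22;    % postage per piece, per the USPS calculator
  weight_oz    = 0.5;     % ounces per piece

  printf("payload cost : $%.0f million\n", pieces * payload_cost / 1e6);
  printf("postage cost : $%.0f million\n", pieces * postage_cost / 1e6);
  printf("total weight : %.0f tons\n", pieces * weight_oz / 16 / 2000);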

These wildly different costs drive wildly different behaviors.  Free or almost free means you can afford to oversend, without any regard to being selective about it.

Not so great versus terrible response rates

The direct mailers tell us to expect a 2.15% response rate for a well-designed campaign.  That’s not so great, but the business case works and lots of companies and products depend on it. I suspect the 2.15% rate holds with the typical direct mail campaign in the range of 10s of thousands of pieces, not 100s of millions.

The 0.000008% response rate reported by the UCSD study is terrible, something like 250,000x worse than direct mail. The business case works too, although most classical market analysis would classify it as a niche. I like niches.  Chris Anderson would call it the far end of the long tail.

I must point out that, as much as I hear people protesting SPAM, the reason we have it is that it works.  There are a few people out there who will buy in response to it.

Implications for using the Cloud

Developing and prospering from niches is not new. Realizing the benefits of utility computing is not as far off as you may think.  Clever people exploiting the dark side of the Cloud have been doing it for years.  Check your inbox.

Time for the rest of us to step up and participate. db

Choose Wisely When Using the Cloud

December 27th, 2008


Nick Carr asked a simple question on his blog:  “Are we missing the point about cloud computing?”  He goes on to share an example from Derek Gottfrid at the New York Times, where Gottfrid solved a big problem converting 4 terabytes of Times TIFF files to PDFs using Amazon’s Elastic Compute Cloud (EC2).  100 virtual computers working for something under 24 hours at a cost of $240, and out comes 11 million PDFs. I am speculating here, but I imagine Gottfrid put the $240 on his credit card.  The mission according to Gottfrid:  “The New York Times has decided to make all the public domain articles from 1851-1922 available free of charge.”  Very cool.  Read Gottfrid’s account here.
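
The arithmetic behind that figure is worth a pause.  Here is a rough Octave check, assuming the roughly $0.10 per instance-hour small-instance rate of the day (my recollection, so treat it as approximate):

  % Rough arithmetic on the NY Times EC2 job described above
  instances = 100;        % virtual machines
  hours     = 24;         % "something under 24 hours"
  rate      = 0.10;       % approx. USD per instance-hour (assumed)
  pdfs      = 11e6;       % PDFs produced
  input_tb  = 4;          % terabytes of TIFF input

  printf("compute cost : $%.0f\n", instances * hours * rate);
  printf("per PDF      : $%.6f\n", instances * hours * rate / pdfs);
  printf("per TB input : $%.0f\n", instances * hours * rate / input_tb);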

Point is, utility computing (the preferred, more descriptive label than cloud computing in this case) solved a juicy new problem, not some warmed over set of requirements well-served by Times’ current systems.

When you read Clayton Christensen’s The Innovator’s Solution, you’ll find fascinating models describing the utility of a product, and they may help us select the right problems to solve with cloud computing.  Christensen develops the hypothesis that any new product that fits on the current trajectory – the continuum of functionality vs. utility – is subsumed by the incumbent suppliers that inhabit the curve; innovators that enter a market at some point on the curve, or an expected point on its trajectory, get crushed by those incumbents.  His lessons?  Incremental and expected improvements revert to the benefit of the incumbents.  And if you want a distinctive, defensible position in a market, find an off-trajectory position that the incumbents cannot or will not attempt to serve.

Christensen’s model is extensible beyond a simple product.  If we consider “product” to be a “solution” consisting of a set of processes, people and technology, the model still holds.  Introduce a problem / potential solution into an organization that falls near the [improving] trajectory of existing processes, people and technologies within the organization, and the existing organization will handle it as it always does.  That’s inertia, and it resists disruption (Christensen’s term), the derivative of true innovation.

So what does this have to do with cloud computing?

Returning to our lessons above, cloud or utility computing applied to solutions we could reasonably have achieved with incumbent processes, people or technologies will likely not be innovative, disruptive or, frankly, very interesting.  It may indeed be cheaper, and the wheels of the competitive markets will turn over the next years to find some new equilibrium in a cost-driven model.  This is the evolutionary progression in computing of the last 40 years, as we traversed from service bureau computing to corporate mainframe to departmental mini to personal computer to client-server, and so on.

The spackling over of all the computing stuff we have now with the hyped “Cloud Computing” label is happening with abandon.  Good news in that awareness and buzz are high; bad news in that the inevitable post-hype backlash is coming.

What to do?

Choose wisely in selecting your problems and how you frame them to achieve the breakthrough advantages of utility computing.

I have no first-hand knowledge of the NY Times, but I imagine their IT and finance processes and people are top shelf.  Go to any well-run company’s IT shop and ask for 100 servers, or go to the ‘New Applications’ window and try signing up a project like Gottfrid’s.  You’ve chosen to play on the trajectory of the IT Infrastructure group, the new applications group, or the insert-some-label-from-the-current-org-chart-here group.  Good luck.

Instead, pick a problem not served by some application already in the data center, one considered impossible by the professionals or, better yet, one they cannot or will not solve for you.  Even better, try a business model with the most tricked-out computing requirements without owning any servers.  Pull out your credit card and get going.