The End of the Enterprise Control Plane

While working to support a large-scale Enterprise cloud migration program, I’ve been giving a lot of thought to the question of what it means to be an Enterprise IT organization at the far end of such a migration.  There are many wide-ranging impacts, of course, but one of the near-term considerations is what it means to “operate” in the cloud.

In the world of data centers and large infrastructure teams, one of the most fundamental principles is that efficient operations require a set of broad tools which can be used to control a variety of common functions.  This led to the growth of large suites of IT Operations and Management tools from IBM, CA, HP, and BMC in particular.  These tools allowed systems and operations teams to centrally maintain, monitor, and control the vast array of servers the Enterprise IT teams were responsible for in a scalable manner.  And it made a lot of sense.  Without this type of control plane it was almost impossible to maintain any semblance of cost or operational control, and I’m certain that many implementations were driven by embarrassing outages, expensive security breaches, or aggressive cost control measures.  “We’re a big IT shop.  We need a strong mechanism for control.”

Enter the cloud.  Most early adopters of Infrastructure cloud services in the Enterprise (running almost entirely on Amazon Web Services) were isolated, and often outside of the IT department itself.  The notions of operations and control were either foreign or the frustrating bottleneck to escape.  But as The Cloud started to attract executive attention, either because of concerns or excitement about these fringe internal activities, several things started to happen.  The first was the turn of attention from existing Enterprise IT vendors, who started to develop their own cloud-like services, typically built as software solutions to be deployed on their existing lines of hardware.  Next, seeing the increasing momentum of public cloud services, they began propagating the idea of the hybrid cloud, the pleasant notion of simultaneously running operations both in on-premise “private” clouds and in public clouds like AWS, Microsoft Azure, and Google Compute Engine.  At the same time, a few start-up vendors emerged–in particular RightScale–who were looking to make multi-cloud deployment and management possible.  While initially this market focused on public cloud environments, the natural evolution, validated by the entrance of the traditional vendors as well, was to look at the full portfolio of enterprise cloud operating environments, and so emerged the Cloud Management Platform market.

With these vendors came a commonly repeated product aspiration:  to provide Enterprise IT with the “single pane of glass.” This would allow Operations staff to keep track of all of these new cloud things, move workloads seamlessly between private and public clouds, and do all of those Very Important Activities that were critical for central IT to perform.  It sounded great!  It made perfect sense to today’s IT executives, and so was an easy sell for vendors–new and legacy–who wanted to keep up the sales pipeline.  The same Enterprise Control Plane, re-imagined for the cloud world.

The “single pane of glass” IT operations model: Rational.  Sensible.  And completely wrong.

The problem with the application of the Enterprise Control Plane to the cloud is that it brings forward all of the old assumptions about what is necessary to securely, effectively, and efficiently manage infrastructure.  When Infrastructure is a tangible thing, you first need a way to manage all of its physicality–power, cooling, installation and maintenance of the hardware and its components.  You had to know if something was going wrong and when to get it fixed, and you relied on your operations software to do this.  Hardware in the cloud?  It’s still there, but you don’t have to care any more.

Next, the actual systems need to be built and installed.  While this has become less of a craft business than it once was, building the necessary components to make it more scalable–leveraging, of course, your newly licensed add-on component for your operations software–is still labor and capital intensive.  While the cloud doesn’t always eliminate your specialized system designs, in many cases it can, and at the very least you don’t need to buy, install, and configure all of the extra software to make it simple for your community–whether they be developers or system admins–to deploy new servers.

Within operations, a large control plane was how ongoing functions such as releases, configuration management, monitoring, logging, and auditing could be performed, and these were the domain of Infrastructure professionals.  Again, these are all still necessary in the cloud, but an entirely new set of services and tool chains have democratized their implementation and use.

The net of all of this is that the value once provided by infrastructure staff–and the hardware and software they managed–has been simultaneously commoditized and dispersed.  And as the data centers begin to close, the era of vast Enterprise IT control planes will close as well.

The functions once provided by the Enterprise Control Plane have not become irrelevant or unimportant–it’s just that the locality and means by which they are provided have changed.  A large part of their one-time value is now embedded within the services offered by cloud vendors, and another segment can be adequately managed closer to the point of value, namely within the business or software management and development teams.  There will likely remain services that look a bit like Infrastructure today, but the emphasis will shift from one of control to one of enablement, a means to support the actual business objectives more directly and in a far less monolithic fashion.

While it won’t collapse overnight, I think it is fair to say we are at the end of the era of the Enterprise Control Plane, and at the start of the era of Enterprise IT Enablement.  This isn’t a value judgement, only the recognition that the IT Infrastructure service model which made sense for so long is radically changing, and with it so must our assumptions about what is required to deliver and support services in this new world.

Random Thoughts on The Semantic Web

The idea of the Semantic Web has been simmering for some time now, the (next) great idea of Sir Tim Berners-Lee to move beyond just linking documents to giving them a structure that would allow for the programmatic discovery of links, and to create along with it another great wave of innovation.  It has been simmering largely because there was no compelling, singular current benefit to be gained from creating semantic content, which requires significantly more effort than just authoring “regular” content.  To many people, the whole notion was too far-fetched, a neat idea that was completely impractical to implement and so a waste of time to pursue.

Recently, though, the cynicism which the Semantic Web encountered early on seems to be dying back.  This is partially because the Semantic Web proponents don’t talk as much about the Grand Vision, and also because there have been some successes and inroads.  The most notable of these have been with Linked Data (it is easier to impart semantic structure to data than to content) and with various simple semantic structures (RDFa, microformats, FOAF, and GoodRelations, to name a few).

Some of the success in both areas has been driven by the fact that much of the content today is generated through web applications like content management systems, which have created data fields for the capture of additional information besides just “content.”  Though the purpose of such structure was originally to achieve consistent formatting for websites and aid in search engine optimization, once in place it became very easy to have it generate semantic mark-up as well, even if there wasn’t any particular use defined.  The other reason is that there have been a number of tireless champions diligently working for so long now that they have actually managed to create interest, change, and actual content; small works over long periods eventually begin to create noticeable effects.  Because of these forces the Semantic Web is becoming a reality, even if it isn’t appearing as a great fireworks event, or in its most elegant and pristine form.
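To make that mechanism a little more concrete, here is a minimal sketch, in Python, of how fields a content management system already captures could be re-emitted as semantic markup.  It is purely illustrative:  the field names, the to_microdata() helper, and the choice of schema.org-style microdata are my own assumptions, not taken from any particular CMS.

    # Minimal sketch: re-emitting structured CMS fields as semantic markup.
    # The field names, the to_microdata() helper, and the use of schema.org
    # microdata are illustrative assumptions, not from any specific system.

    article = {
        "headline": "Cloth diapers, revisited",
        "author": "A. Blogger",
        "datePublished": "2011-03-01",
    }

    def to_microdata(fields):
        """Render already-captured CMS fields as schema.org Article microdata."""
        props = "\n".join(
            f'  <span itemprop="{name}">{value}</span>'
            for name, value in fields.items()
        )
        return (f'<article itemscope itemtype="http://schema.org/Article">\n'
                f'{props}\n</article>')

    print(to_microdata(article))

The structure was captured for formatting and search engine optimization anyway; emitting it again in a machine-readable form is nearly free.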

A few more specific examples might help illustrate why people are interested in the Semantic Web. Before I left Harvard I met with some of the people developing a platform they called the Scientific Collaboration Framework.  This project was building a system so that scientific communities of interest could create a collaborative workspace to make it easy to share research, papers, ideas, data, and other things.  Despite the fact that scientific communities are relatively small, it is very easy to miss research which is similar or complementary to your own when it is not occurring in the specific confines of your particular research discipline.  The framework sought to promote communities around topics instead of disciplines, with a considerable amount of effort spent making it easy for researchers to add content that could be automatically tagged and referenced to semantic terms to aid in the discovery of new collaboration opportunities.  One might call it social networking for data, in the sense that by posting your own research, you could discover other people who were working on similar problems, which could unearth both data useful for your own investigations and other researchers with whom you could create new and novel collaborations.

More recently I’ve been following the development of GoodRelations, a semantic vocabulary for commerce (very generally defined) which provides a standard way to describe companies, products, and offers.  Once so described, it becomes considerably easier for the data to be utilized in any number of ways, such as submitting it to product search engines.  In general, creating this type of semantically structured data should help companies achieve greater visibility in the marketplace by making it easier for consumers to find information when they search.  But there is also the larger prospect that instead of relying on users to do just the right search, it could become possible for a system to actively place the right product, at the right price, at just the right time to the right consumer who is looking to buy, which some might call the holy grail of marketing.
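To give a rough sense of what “so described” looks like in practice, here is a minimal sketch using Python and the rdflib library to assemble a tiny GoodRelations-style description of a shop and one of its offers.  The example business, the offer, and the exact property names are assumptions drawn from my reading of the vocabulary, so check the GoodRelations documentation before relying on them.

    # Minimal sketch of a GoodRelations-style description, built with rdflib.
    # The example business, offer, and exact vocabulary terms are illustrative
    # assumptions; consult the GoodRelations spec for authoritative names.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    GR = Namespace("http://purl.org/goodrelations/v1#")

    g = Graph()
    g.bind("gr", GR)

    shop = URIRef("http://example.com/#shop")
    offer = URIRef("http://example.com/#offer1")

    # The company itself...
    g.add((shop, RDF.type, GR.BusinessEntity))
    g.add((shop, GR.legalName, Literal("Example Cloth Diaper Shop")))

    # ...and one of its offers, linked back to the company.
    g.add((offer, RDF.type, GR.Offering))
    g.add((offer, GR.name, Literal("Cloth diaper starter kit")))
    g.add((shop, GR.offers, offer))

    # Serialize as Turtle, one form a product search engine could ingest.
    print(g.serialize(format="turtle"))

The point is simply that once this structure exists, the same data can be handed to a product search engine, or any other consumer, without guesswork about what the names and numbers mean.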

The general promise of the semantic web, then, is to make it much easier to find and utilize interesting connections between objects—people, data, products, anything really—where historically it would have been extremely difficult to identify the connection.  These connections can take any number of forms, from the examples above to job seekers and employers, apartments and renters, a restaurant you didn’t know about, an article relevant to the blog post you’re writing, the long-lost cousin who just moved around the corner from your best friend from college.  So instead of relying on random chance to create connections and surface useful information, the connections can be discovered instantly when needed, or even in advance of us knowing that there might be an interesting connection to be made.

Regardless of how it evolves, I think there will be very fascinating things appearing which are built upon semantic web technologies, some of which will be very useful but not obvious (like Google’s Rich Snippets, which add Yelp reviews to your searches) and others which will be considerably more ambitious (like Siri, the personal assistant, which was acquired by Apple).  As more data and content is generated, tagged, or transformed to include semantic elements, the promised innovation will come, and we will find it ever easier to discover information, and to have it discover us as well.

Mobile Everything

It isn’t much of a stretch to say that the iPhone set into motion a significant transformation when it was launched, changing the phone from a device used to make calls and take pictures to an entirely new computing platform.  While the current impacts have largely been around how people interact with information online, the coming years will show that the more lasting impact will come because it transforms how we interact with the world itself, and that this change will be the defining technological trend of the coming decade (a view which is certainly shared by people who spend far more time looking at these things than I do).

Our current mode of interacting with information online is principally session-based:  we sit down at a computer and interact for a time (though sometimes a very long time), and then stop, and leave the online world to rejoin the physical world (or, as the collegiate taunt went, “Log off the computer, log onto life!”).  So when those times come that I’m away from my computer but think “I should look that up online,” the transaction barriers to actually going online (walking to my computer, finding my laptop, waiting for everything to come out of sleep or power on) often leave me thinking “I’ll do that later.”  Usually I don’t, though, because what I was interested in was only relevant at that moment, or I’ve just forgotten the task.

With a smartphone or other device which is really more personal (like the iPad), we usually have it on (or very near) us, it takes very little time to initiate an online engagement, and in many ways, it is much easier to disengage from as well.  The interactions are no longer in a session, but instantaneous and frequently transactional:  we do a little task and then stop.  I keep my grocery list on my Android phone, and now when the kids complain that we are out of string cheese, I quickly add it to my list.  I don’t have to hunt for a scrap of paper which I’m likely to lose, or stand in the store remembering only that there was a request for “something.”

Even more unusual, smartphones now have access to information which is both contextual and personal to you.  The applications we use on our computers derive input almost exclusively from the keyboard and mouse, and maybe a web camera or microphone.  Our new devices gather an entirely new range of inputs—location, orientation, acceleration, ambient light, magnetic field, temperature, proximity—all of which are available for use by different applications, in addition to the standard inputs of text, voice, and video.  The use of just the location sensor has created a huge number of new applications and interactions, many of which are either completely nonsensical on a desktop computer (such as Foursquare and other location check-in services) or become far more valuable on a mobile device (like the Zillow application to see real estate for sale around you).  Even more functionality will come with the addition of RFID readers to mobile phones, which could allow everything from mobile credit-card payments and building access to letting your house know you are home so it can turn up the heat.  Other innovations add their own device or sensor to the phone, like a credit card reader and application that allows anyone to get paid by credit card (friends short on cash at the pizza place?  No problem, you can take credit even if the pizza place doesn’t!).

Because we’ve never had this type of ubiquitous device with a massive stream of personal data (created, discovered, and sensed) flowing in (and out), much of the innovation will be things we can barely even conceive of now, because there exists no analogue. This is why the impact of mobile computing goes far beyond just access to information and how we go online, and starts to thread through our “real” lives in very significant ways.  Though we will need to think about the inevitable tradeoffs which result (privacy and security being the elephant in the room—but that’s a vortex of discussion to descend into later), it is clear that these little devices are going to impact our lives in ways just as significant as the personal computer and the internet have in the preceding few decades.

Do Web Applications Suck?

A recent rant about the state of the web has made its way around the tech chattersphere, and is part of the growing commentary on application development for native applications (something you install on a PC, Mac, iPhone, or Android) versus web applications (written to work in any of the major web browsers).  The general gist is how fantastic it is to see the re-emergence of cool native applications, and how much more robust, functional, and well-designed they are compared to web applications, because they are liberated from the lowest common denominator approach of web standards and can take full advantage of the specific device they are running on.

So are web applications now the equivalent of MS-DOS applications, circa 1990: dinosaurs in the face of applications with windowed, mouse-controlled user interfaces?  Or is this just excitement over what’s shiny and new right now?

Context, part I:  One computer to many devices

It wasn’t too long ago that everyone was celebrating the death of traditional (aka “native”) applications and the liberating rise of web applications.  In the past, you had to have your floppy disks, your thumb drive, or your e-mail file transfer system to shuttle data between all of the local native applications you ran.  You had to hope your friend had the same software installed to show them your work.  And you got annoyed.  You didn’t have the right version…of the file, the software, whatever. Or there just wasn’t any way to do the job electronically at all, because the necessary software was too expensive or too complicated.

So when the web application revolution began, it was liberating.  You could get to your e-mail from any computer, spend endless hours searching for the best airline fare without a grumbling travel agent, file your reimbursement before you even got back to the office, and unleash your creative skills designing your next holiday card.  All from the web.  And you couldn’t wait until you untethered yourself from every last installed application you had on all the various computers you owned and used.

Context, part II:  From fixed on the ground to mobile in the cloud

So why the sudden counterrevolution to take us back to what we were so happy to get away from?  If all of those “old” applications were such a frustration, why are the new ones so appealing?  For the answer, we look to two of the big changes in the last few years in how we use computers and applications:  cloud-based services (typically remote data storage and computer processing at large scales) and mobile devices (aka “smartphones”).  And an additional “secret sauce” we’ll get to in a minute.

From the viewpoint of the user, you could say that most web applications succeeded because they took advantage of the cloud (even if it doesn’t meet the technical definition):  whatever you were doing happened on the web and the data you created was stored there as well, for you to access from anywhere.  The big change from old applications to web applications was this connectivity.  The new native applications have stolen the thunder from the web applications, because they use the cloud too.

The other big difference is that where we see most of the native application success is in the world of mobile devices:  the iPhone, Android phones, and now, in particular, on the iPad.  These are very different from most computers, because they are personal, in the sense that you actually take them with you.  Everywhere.  So the old application problem of not having it on the computer you are using goes away…because you are always using just that one device.

And the secret sauce of the new native applications?  It’s design.  “Traditional” desktop applications were horrendous looking things, both because nobody seemed to care and because there wasn’t enough computing power to make things look nice.  The evolution of the web brought in a cadre of people with design backgrounds while Apple elevated interface design into an object of worship, and now the new applications have adopted similar attention to design, and look nothing like the old systems we once abhorred.

Pick the right tool for the job

The only thing that all of this tells us, though, is that you can now make native applications that are really good, and that succeed both because of what the web created (cloud services and nice design) and because of a whole new paradigm in mobile personal computing, not to mention the intrinsic virtues of a native application.

But is it the death knell of the web application?

I seriously doubt it, because there is a whole dimension of the question still unanswered:  what, exactly, is it that web applications “suck” at doing?  But it is perhaps easier to see this by looking at when native applications are better.

Native applications are fantastic when you (as a user) have a task which:

  • You undertake frequently and repeatedly (like checking e-mail);
  • Relies upon a set of services or contextual information specifically native to the device you happen to be using (think GPS location);
  • Has very complicated workflow or interaction behavior which benefits from a highly sophisticated design interface; or
  • Is performed primarily on a single, “personal” device like a smartphone, which you always have with you.

While there are certainly a large number of applications which fall into these categories, many don’t.  And for these types of applications, a web application is probably better, or at the very least, “good enough.”  And there isn’t anything wrong with that.

Serving up universal access

And let’s not forget one area where web applications vastly outperform native applications:  universal access.  This is a feature that can get easily overlooked by the new apostles of the native application.  It is certainly fantastic to be able to benefit from the vastly improved interfaces that some of these devices enable, but it is worth remembering that these devices are considered a luxury for many, whether as individuals or as institutions.  One could argue that this could change (and it probably will), but the trailing edge of adoption moves very slowly.

One of the great benefits of the web is the level of access to services and information which has been granted to the public at large.  A public whose needs have been successfully served because of the lowest common denominator approach of standards-compliant web technology.  Fostering a new race to develop competing platforms significantly undermines our ability to serve a wide and general public, to serve those who happen to be on the trailing edge.

And ignoring their needs is what would really make the web suck.


I.  Sitting, Standing, and Walking

I recall a statement from a course in graduate school to the effect of “where you sit is where you stand,” meant to illustrate that your opinions are as much a product of where you happen to be (both physically and intellectually) as they are of conscious decisions which you make.  Of course we have a bit of a hand in deciding where we sit, though the ramifications of those decisions are rarely apprehended at the time, and the reverberations usually apparent only at a distance.  So I thought it was worth a brief tour through where I have spent my life sitting, to reflect a bit upon where I stand.

Some people are happy to find a place, an idea, a cause, or just a small garden plot that they can settle upon and tend to for long periods of time, perhaps even their whole life.  They are focused.  They often have a clear sense of what they are doing, where they want to go, and what they want to achieve, which can range from idle nothing to the grandiose.

And then there are those who are destined to wander.  And I am, most certainly, one of them.  I find myself fascinated, in a weird, out-of-body anthropological way, by where I started, where I’ve been, and where I am now, and left curious as to where I’ll end up.  And I’d say that at almost no point would it have been easy to predict where I’d arrive in the future, which makes it interesting to contemplate what today’s future holds.

II. In the (now) Distant Past

I started my own life in Colorado, but returned to my parents’ home state of Washington before any memories of the Rockies were formed.  Undergraduate life was a four-year journey to Southern California, followed by graduate school in Boston, which became an unexpected home for a dozen years.  And now, I’m back in Washington State, bringing my children in closer range to one set of grandparents (the others, and my wife, are from Houston).  While not a military rotation by any means, it certainly has stretched across distances, and been coupled with a good bit of travel.  I remember an early trip when I was seven, driving from Washington to Washington (DC) and back.  My own travels include the typical European Grand Tour destinations, but also the icy obscurity of Greenland, the virtually unheard-of Faroe Islands, and an extended trip on a Coast Guard icebreaker north of Alaska (well, perhaps there’s a bit of a pattern to my adventure travels).

My own academic and career life was long focused upon the natural sciences, moving from an early interest in astronomy to a more lasting fascination with physics, until my encounter with real physics, when I became a chemist as an undergrad.  I was an unconvincing scientist (Caltech rejected me on the basis, I’m certain, of having not competed in Science Fair one year to focus on debate, though that’s probably why I actually fit in at Harvey Mudd).  So the idea of life on a lab bench never had a strong appeal, which sent me into a graduate program for Technology and Policy at MIT.  Though really those terms should have been left in lower case, for the first question on my arrival was what type of technology policy I studied. Energy? Transportation? Environment?  Two years later I’d not found the answer, though my thesis title implied it was something to do with the environment.

My post-graduate quest for a career became an amble as well.  I didn’t want to move to DC to do policy, and the local environmental policy firms could obviously recognize my lack of interest.  I was offered a job as a management consultant, but after being told that upon completing a big engagement you usually got Saturday off, I declined, and in crisis found myself walking a mile from the subway into the cold, early morning sun to take the Naval Officer Candidates Exam, though I pursued nothing beyond that.  And so I wandered through a few temp positions, including staffing one of the IT support centers at Harvard, which became my first permanent job after my MIT graduate degree.

III.  To Return From Whence I Came

Despite owning a Timex-Sinclair 1000 when I was ten, learning how to program shape tables in Apple assembly language, and always being near computers, I never wanted to “work” with computers.  But computers found me, and that has been my career.  As I was obviously not the typical helpdesk staffer (though to be honest, most entry-level administrative positions at Harvard are staffed by ridiculously overqualified people), I took a variety of positions in Harvard’s Central IT group, managing the helpdesk I started with, working with clients of the main server support group, and eventually helping to run a large internal web development and support group.  And so I guess I’ve come to embrace what I have long been engaged with, but spent a long time ignoring.

And now?  Well, a few small people to raise and a very busy life away from any family became too much for my wife and me, and so we moved to…Spokane, Washington, where my parents and sister live.  So part of my current job is to take care of the children, until such time as I can find a creative way to make a living and pay someone else to take care of them.  My other activity is helping my wife to run our cloth diaper retail business.  Yes, I am not only an ex-chemist turned IT manager, but the Kingpin of Cloth Diapers.  But that’s for a later discussion.

So now I sit in the basement of my house in Spokane, typing out this trot through my past, wondering if any of this reflection will help to paint the future.  Perhaps.  It has been a fun little journey, and a reminder of the strange and interesting things to which I’ve been exposed.  Yet the objective is not to indulge in the past or lean upon stale recollections.  It is to open up thought upon where I have sat, so that I might figure out just where I stand, and towards where I should make my next peregrinations.  Onward!

Squeezing the sponge

I am acquisitive. But not in the usual sense, as it isn’t gadgets, gear, or money that I like to collect and hoard. It’s ideas, information, and experiences which I collect. Much of it is useless on its own, but unlike the collections of things which accumulate in garages, closets, and bank accounts, I feel that collecting all of these abstract hoardings together brings value in the connections and relations which can be made between them, and that many new ideas, thoughts, and creative expression can result.

But it hasn’t really happened. Much as with all of the things that get squirreled away and forgotten, my own thoughts and inspirations gather their own form of dust and never see the light of day. Which is unfortunate, because I actually think I have a lot of interesting experiences and knowledge which can be mixed together to produce something exciting. I just need to actually make the effort to do it. I just need to take all that I have absorbed through the years, from reading, travels, experience, and daydreaming, and do more than acquire.

And so it’s time to squeeze the sponge. It’s time to do a bit of mental exercise, to crystallize the daydreams into coherent form, to actually make use of what I learn instead of sending it off into the archives for it to moulder. It isn’t the most novel of undertakings, nor am I convinced that it’s of much use to anyone but myself. But I KNOW that it is of use to me, and that taking the time to do more than just read and ponder is critical if I’m to move my own life forward. And if it happens to pique the interest of a few other people along the way, well, it’s always nice to find the pleasant surprises and enjoy the unexpected.