SaaS 2.0: time for an Uncloud™ approach

SaaS has been around for a while, so it's time to start thinking about SaaS 2.0.  SaaS 1.0 allows applications to be run and maintained by a SaaS provider in a remote data center on behalf of your organization.  This gives your business users the agility to quickly obtain the benefits of new software packages while reducing the need for your IT organization to build and maintain the expertise required to manage those applications.

However, as organizations change, grow rapidly, merge, or divest, they require even more agility.  They need to be able to seamlessly move data into and out of these SaaS applications, and to integrate across them.  The challenge is exacerbated by the number of different SaaS solutions organizations adopt.  Depending on the data movement involved, privacy requirements, and an organization's IT skills, where the software is actually deployed matters.  Often it makes more sense to deploy inside your own network, where you have more bandwidth and lower latency.

Currently there are advocates of a hybrid cloud approach, which marks the beginning of this shift.  The hybrid approach is really about combining traditional on-premises and cloud solutions.  However, by using available automation for deploying and maintaining systems, along with current container technology like Docker, SaaS vendors can maintain their software within your network.  This is the start of the Uncloud™ approach that I see defining SaaS 2.0 going forward.  With an Uncloud™ approach, SaaS 2.0 vendors will maintain and manage applications that may be in the cloud or in your data center, and applications can be migrated freely between the two locations as needed.
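
To make this concrete, here is a minimal sketch of what vendor-driven, in-network deployment could look like using the Docker SDK for Python; the image name, tag, and network name are hypothetical placeholders, not any real vendor's product:

    # A minimal sketch of vendor-managed, in-network deployment via the
    # Docker SDK for Python. Image, tag, and network names are hypothetical.
    import docker

    client = docker.from_env()

    # The vendor's automation pulls its latest release and runs it on the
    # customer's internal Docker network, keeping data inside the firewall.
    client.images.pull("vendor/saas-app", tag="2.0")
    container = client.containers.run(
        "vendor/saas-app:2.0",
        detach=True,                          # run in the background
        network="corp-internal",              # hypothetical in-network bridge
        restart_policy={"Name": "always"},    # survive host restarts
    )
    print(f"Deployed {container.short_id} inside the local network")

The same automation could just as easily target a cloud host, which is the point: the deployment location becomes a parameter, not an architecture decision.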

With an Uncloud™ approach you gain the advantage of having your different SaaS products in the same network, enhancing connectivity while still letting vendors manage the software.  It allows you to decide which SaaS vendors to use and, separately, decide where to keep your data.

Top ten truths about data projects

 


#10
Money is like data: if you invest it, manage it, and protect it well, it can pay off immensely. But do any of these poorly and you'll regret it.

#9
Development methodologies keep changing… mostly in name.

#8
The only thing more expensive than free software is free software implemented by the lowest bidder.

#7
Master Data Management is a transitional state until you get to the fully integrated environment… and once you're there, you'll need to add another system.

#6
Big data is incredibly valuable, unless someone forgot to govern it.

#5
Agile is great, but knowing your real requirements is better.

#4
If data governance is painful, too slow, or too costly, it's being done wrong.

#3
Choosing the lowest-cost integrator is like choosing the cheapest plumber…  Once they're done it looks great!  The flood comes later…

#2
Data is great, but like a teenager it has a tendency to just sit there; it only becomes really useful when it's finally in motion.

#1
Business logic and data handling are like two parts of epoxy; once they’re mixed you are stuck for a long time.

 

Zero Wait Information ≠ Real time


How to Choose the Right Data Movement: Real-time or Batch?

We all want a “zero wait” infrastructure.  This has spurred many organizations to push all data through a real-time infrastructure.  It's important to recognize that “zero wait” means the information is in ready form when a user needs it, so if the user needs information that includes averages, sums, and/or comparisons, there is a natural need for a data set that has been fully processed (e.g., cleaned, combined, augmented).  Building the data infrastructure with this in mind is very important.
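
As a toy illustration of what “ready form” means, the Python sketch below (all names are made up for the example) does the aggregation work at ingest time, so the user-facing read never waits:

    # Toy sketch: keep aggregates up to date on ingest so reads never wait.
    # Class and method names are illustrative, not from any specific product.
    class ZeroWaitMetric:
        def __init__(self):
            self.count = 0
            self.total = 0.0

        def ingest(self, value: float) -> None:
            # Do the processing work up front, when the data arrives...
            self.count += 1
            self.total += value

        def average(self) -> float:
            # ...so the user-facing read is instant: no scan, no wait.
            return self.total / self.count if self.count else 0.0

    metric = ZeroWaitMetric()
    for reading in (98.5, 101.2, 99.7):
        metric.ingest(reading)
    print(metric.average())  # ready the moment a user asks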

The popular point of view is that real-time processing is the “modern” solution and that batch processing is the “archaic” way.  However, real-time processing has also been around for a long time, and each mode of processing exists for a different purpose.

One trade-off between real-time and batch processing is high throughput versus low latency.  Choosing one over the other can be somewhat counterintuitive for the broader team, so it is important to determine the throughput and latency requirements independently of each other.  A great example of throughput versus latency is the question, “What is the best way to get from Boston to San Francisco?”  You might answer, “By plane.”  That is true for transporting a small group of people at a time, since it yields the lowest latency, but would a plane be the best way to move a million people at once?  How would you get the highest throughput?
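
Some rough numbers show how the two measures pull apart; the capacities and travel times below are assumptions chosen for the analogy, not sourced figures:

    # Back-of-the-envelope math; capacities and durations are assumed
    # for illustration only.
    modes = {
        "charter plane (small group)": (10, 6.5),  # (people per trip, hours per trip)
        "cross-country train":         (1000, 70.0),
    }

    for name, (capacity, hours) in modes.items():
        latency = hours                 # time until the first passengers arrive
        throughput = capacity / hours   # passengers delivered per hour of travel
        print(f"{name}: latency {latency:.1f} h, "
              f"throughput {throughput:.1f} people/h")

    # The plane wins on latency (6.5 h vs 70 h), but the train delivers
    # roughly ten times as many people per hour: that is throughput.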

Real-time processing is very good for collecting input continuously and responding immediately to a user, but it is not the solution for all data movement.  It’s not even necessarily the fastest mode of processing.  When deciding whether data should be moved in real time or in batch, it is important to define the nature of the business need and the method of data acquisition.
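
As one last sketch, the toy model below (the overhead and per-record costs are assumed values, chosen only to show the shape of the math) illustrates why batch can win on total processing time even though each record waits longer before it moves:

    # Toy model of the trade-off; all costs are assumed for illustration.
    records = 1_000_000
    per_call_overhead = 0.002   # seconds of fixed cost per request/commit
    per_record_cost = 0.0001    # seconds of actual work per record
    batch_size = 10_000

    # Real-time: one call per record, so the overhead is paid a million times.
    realtime_total = records * (per_call_overhead + per_record_cost)

    # Batch: overhead paid once per batch; each record waits, but the
    # total time drops dramatically.
    batches = records / batch_size
    batch_total = batches * per_call_overhead + records * per_record_cost

    print(f"real-time total: {realtime_total:,.0f} s")  # ~2,100 s
    print(f"batch total:     {batch_total:,.0f} s")     # ~100 s

If the business need is an immediate response to a single event, pay the per-call overhead; if it is moving or reshaping large volumes of data, the batch numbers above usually tell the story.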