Databricks joins Amazon and Google in providing Big Data Analytics on Cloud

Today at Spark Summit, Databricks CEO Ion Stoica announced the company's first product, Databricks Cloud. This was one of two important announcements from Databricks today. The first one was that Databricks raised a $33M Series B round of funding from Andreessen Horowitz. But IMO the availability of Databricks Cloud is the more significant one.

A few months back, I had an opportunity to have a teleconference with Ion. During the discussion, while talking about the business model, he emphasized Databricks's Spark certification service. He was (rightly so) vague about other developments and the business model. After the call, I discussed with a colleague a possible product around an IDE for programmers and data scientists and monitoring of Spark clusters. But what we saw today was much better.

Databricks Cloud puts Apache Spark on the cloud. Big Data on the cloud is not a new thing. However, what Databricks has provided is an interactive, SQL-based web tool for data scientists to play with data and visually see the output in different forms such as trends and charts. It also provides a powerful WYSIWYG dashboard builder.

By making Spark and Spark Streaming available on the cloud, Databricks joins Google and Amazon, both of which have streaming services with an analytics stack available on the cloud for building real-time analytics and dashboards. However, the key difference is that Amazon and Google built those services for programmers to write streaming applications. In contrast, Databricks Cloud is more suitable for data scientists.

Databricks Cloud has a web-based interactive tool called Databricks Notebook. Though the details of the technology it is built on are not yet available (in fact, the Databricks website is still silent on the announcement), the concept and look & feel are astonishingly similar to IPython's Notebook. And the name is similar too!! Is it reusing the rich client from IPython Notebook? Of course, it also seems to be different. A few big differences: the Databricks Notebook heavily demonstrates the power of using SQL on data, and it also makes the power of machine learning available to data scientists. Anyway, I am looking forward to seeing more details about the service, and its pricing.
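
To give a flavor of the kind of interactive SQL-on-data workflow such a notebook demonstrates, here is a minimal sketch using the open-source Spark SQL API (PySpark). The file path, table and column names are hypothetical, and the API shown is from a later Spark release than what shipped at the time; Databricks has not published the Notebook internals, so this only illustrates the underlying Spark capability, not the product itself.

```python
from pyspark.sql import SparkSession

# Hypothetical example: interactively querying event data with Spark SQL.
spark = SparkSession.builder.appName("notebook-sketch").getOrCreate()

# Load some JSON event data (path and schema are made up for illustration).
events = spark.read.json("/data/events.json")
events.createOrReplaceTempView("events")

# The kind of ad-hoc SQL a data scientist might run in a notebook cell,
# e.g. daily event counts that could then be charted as a trend.
daily_counts = spark.sql("""
    SELECT event_date, COUNT(*) AS num_events
    FROM events
    GROUP BY event_date
    ORDER BY event_date
""")
daily_counts.show()
```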

Anyway, for the last couple of months, I was exploring the business viability of Spark analytics as a service on the cloud. That idea just got killed! Good that it happened sooner rather than later 🙂

Incompatibility of Data Lake Big Data Architecture with the Internet of Things (IoT)

Social, mobile and application logs have been the three main contributors to Big Data. To handle such data, many stakeholders and experts are promoting a Data Lake architecture, in which the collected data is brought to a central platform where it is stored, processed and made available for interactive analysis as well as for applications.

Recently, the Internet of Things (IoT) joined the party. It is clear that IoT will take this data explosion to the next level. As per one report, there will be 26 billion IoT devices installed by 2020. Wearable devices, sensors, home appliances, etc. will be producing huge amounts of machine data.

The synergy of IoT and Big Data is obvious. However, there is an incompatibility too, especially when it comes to applying the Data Lake architecture to IoT. IoT data would be structured data continuously streamed in, but not all data tuples would be interesting enough for long-term storage. The machine-generated data would not only be huge, but the utility of storing it centrally would be limited.

There are two types of processing that would commonly be done on IoT data:

1. Actionable processing: often done locally and immediately

2. Batch processing for patterns, aggregates and learnings: done both locally and centrally

The local processing would be needed to identify any anomaly or change in the state of the observables, for example, a machine failure or, in the case of Google's Nest, the occupants leaving home, indicating a need to stop temperature control or switch off lights. The other type of local processing is learning patterns from usage. However, there is not much value in bringing the entire data centrally to process it.

Hence, an IoT data processing architecture would need a combination of local 'smart' processors as well as a central Big Data processor. In this setup, the data from all local sensors is processed by local processors, which send aggregated data, insights and, in some cases, sample data to the central Big Data platform. The processors on the edges can collect data from local sensors, process it, learn from it and, if needed, act on the insights.

In this architecture, the central Big Data platform would receive processed data and some sample data from all local processors at some interval. The central platform would further do aggregation, pattern recognition and machine learning on the data from all the sources. Such processing can not only identify and notify local anomalies but also detect and learn from patterns across the entire network. The central processing would be able to identify clusters of behavioral patterns and trends. Some of the insights identified centrally would also be useful for edge processing, particularly to compare local behavior with global behavior.
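
To make the split concrete, here is a minimal sketch of the edge side of such an architecture. It is purely illustrative: the sensor readings, the anomaly threshold and the central ingest URL are all hypothetical, and a real edge processor would use a proper streaming/IoT framework rather than plain Python.

```python
import json
import statistics
import urllib.request

CENTRAL_ENDPOINT = "https://central-bigdata.example.com/ingest"  # hypothetical
ANOMALY_THRESHOLD = 80.0  # hypothetical temperature limit in Celsius

def act_locally(reading):
    # Immediate, local action (e.g., shut a machine down) without waiting
    # for the central platform.
    print(f"ALERT: anomalous reading {reading}, taking local action")

def process_window(readings):
    """Process one window of local sensor readings at the edge."""
    for r in readings:
        if r > ANOMALY_THRESHOLD:          # actionable processing, done locally
            act_locally(r)

    # Only aggregates and a small sample go to the central platform,
    # not the raw stream.
    summary = {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "sample": readings[:5],
    }
    req = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # send the aggregated view centrally

# Example: one minute's worth of temperature readings from local sensors.
process_window([71.2, 72.0, 85.5, 70.9, 71.4])
```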


Hence, an IoT architecture would involve 'intelligent' data processing both on local networks, where the data will be small, and centrally, where the data will be huge, not because the entire data is coming to it but because data would be coming from a large number of small networks. A key thing to remember is that local processing may work on small data, but it would still be intelligent processing. Google's Nest is a classic example of such processing.

Google Dataflow Service to Fight Against Amazon Kinesis

One of the main announcements at the Google I/O conference yesterday, other than wearables and Android L, was the availability of a streaming analytics service called Dataflow on Google Cloud. This service will enable application developers to quickly assemble a real-time, high-volume data ingestion and processing pipeline and make it scalable using Google Cloud infrastructure. Google Dataflow combines Google's internal streaming engine, MillWheel, with an easy-to-program big data processing abstraction called FlumeJava. Google Dataflow is compatible with Google BigQuery, so the output from Dataflow can be fed to BigQuery. Using this pipeline stack, building applications like real-time analytics and dashboards will be easier.
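
Google has not yet published the Dataflow SDK details, but the programming model it describes (FlumeJava-style transforms over parallel collections) was later open sourced as Apache Beam. Purely to illustrate the flavor of that abstraction, here is a tiny sketch using the Beam Python SDK; the event strings and transform names are made up, and the original Dataflow SDK was Java-based.

```python
import apache_beam as beam

# A tiny batch example of the FlumeJava/Dataflow style pipeline:
# create a collection, transform it, aggregate it, emit results.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateEvents" >> beam.Create([
            "click page=home",
            "click page=pricing",
            "click page=home",
        ])
        | "KeyByPage" >> beam.Map(lambda line: (line.split("page=")[1], 1))
        | "CountPerPage" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```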

Google published papers on both of these technologies a few years back. However, Google had not open sourced them. As with other Google papers, these papers also triggered the development of similar technologies outside of Google. For example, Cloudera developed Crunch, which is based on the concepts from FlumeJava, and then open sourced it to Apache. There is another project, called Puma, based on the same concepts.

On the streaming side, various other open-source projects have come up. Twitter has open sourced its stream processor called Storm. Apache Spark, the now-popular open-source project from Berkeley's AMPLab, has Spark Streaming as one of its important components. Yahoo open sourced S4 to Apache, and recently LinkedIn also open sourced Samza, which is based on Kafka, a distributed messaging system that LinkedIn had earlier open sourced. Sometime later I will compare these streaming technologies on my texploration blog at WordPress.

However, though all these technologies aim to solve the same real-time stream processing problem that Google Dataflow is solving, none of them are cloud services. That is where Amazon comes into the picture. Google is fighting this war in the cloud.

Back in December 2013, Amazon made available an AWS cloud service called Kinesis for real-time streaming data processing. This service makes it quick to assemble applications that process the massive streaming data that mobile games or sensors generate. It is cost effective: just 2.8 cents per million records to ingest! Of course, Amazon earns money not only from the processing but also from storing, further processing and using the data. The service can be used for dashboards as well as real-time processing applications.
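
As a rough illustration of how a producer pushes records into such a stream, here is a minimal sketch using the boto3 AWS SDK for Python (a later SDK than what was current when this was written). The stream name, partition key and payload are hypothetical.

```python
import json
import boto3

# Hypothetical example: a mobile game pushing score events into Kinesis.
kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"player_id": "p-123", "score": 4200, "level": 7}

# Each record goes to a shard chosen by the partition key; downstream
# consumers (dashboards, real-time processors) read from the stream.
kinesis.put_record(
    StreamName="game-events",          # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["player_id"],
)
```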

 

Google is yet to price the service. I am not sure whether the pricing would be based on the amount of data processed, but I am sure it will be in the same range as Amazon's.


However, the main obstacle both of them will have is that these technologies are not open source. Hence, it will be a one-way door for customers using them. Vendor lock-in! The biggest competition to them would be application developers using Spark, Storm, etc. on EC2 or Google Cloud. Metamarkets had a similar problem, which it tackled by open sourcing Druid (BTW, though Druid can be used for real-time processing and dashboarding, its architecture and programming model are different from most of the above). I hope Google open sources MillWheel too.

Big Data and Evolution in Storage

Big Data has been a disruptive movement, and it has disrupted the storage world as well. This disruption has created a big opportunity for innovation and profit in the storage industry, giving birth to many startups.

One obvious thing about Big Data is that big data needs big storage. And the performance of big data processing depends on data storage and data movement.

There are three well-discussed, key data-related aspects of Big Data:

1. Commodity hardware: data is stored on commodity hardware, including commodity storage.

2. Data locality: one of the key architectural aspects is to move processing code to the data instead of moving data to the computing node.

3. Replication-based fault tolerance: a typical replication factor is three, which results in a need for three times the storage space.

These three principles of Big Data have posed challenges that needed to be met with innovation. Naturally, established vendors reacted to this with two approaches: in-house R&D innovation as well as acquisitions.

Typical problems associated with data storage and movement are size, access speed, the data movement pipe, etc. The size problem is dealt with through optimized compression and de-duplication. For example, to deal with the problem of data volume, Dell bought Ocarina, which does storage optimization with compression and de-duplication.

Storage access performance is addressed with faster media technologies such as SSD and flash. With the active interest in in-memory databases like SAP HANA, as well as computing frameworks like Spark that are heavily dependent on memory, there is active interest in SSD and flash-based storage. Last month, in May 2014, EMC acquired the privately funded DSSD, which makes rack-scale flash storage better suited for IO-intensive workloads like in-memory databases; EMC had invested in this startup early on. Some notable startups in the area of flash-based technologies are iSCSI, Nimble Storage, Amplidata, VelocityIO, Coraid, etc.

Accelerators and faster access pipes/switches deal with the data movement issue. A couple of years back, NetApp bought CacheIQ, which was a NAS accelerator built around caching. Last year, Violin Memory acquired GridIron, which makes a flash-cache-based SAN accelerator.

Taking this further, innovative startups like Nutanix, Tintri, etc. are providing software-defined (virtualized) storage. These startups are being quickly followed by the existing players. In March this year, VMware announced VSAN, a virtual SAN based on the principles of software-defined storage. Earlier, EMC had also acquired ScaleIO.

Hadoop's HDFS itself has been enhanced to take advantage of this variety of storage types. The HDFS 2.3 release (April 2014) has been significant in this respect: from this release, HDFS has support for in-memory caching and a heterogeneous storage hierarchy. We will continue to see innovation in storage technologies. Choosing the right combination, and configuring and managing it, will be an important task for big data deployments.


BTW, this post does not claim to be a comprehensive survey of storage technologies or startups. The names I mentioned are just examples to make a point. I have not covered all the players, and the names I mentioned are not necessarily preferred choices.

(Also posted on LinkedIn)

Machine Learning on Big Data gets Big Momentum

Big Data without algorithms is dumb data. Algorithms such as machine learning, text processing and data mining extract knowledge out of the data and make it smart data. These algorithms make the data consumable or actionable for humans and businesses. Such actionable data can drive business decisions or predict the products that customers are most likely to buy next. Amazon and Netflix are popular examples of how learnings from data can be used to influence customer decisions. Hence, machine learning algorithms are very important in the era of Big Data. BTW, in the field of Big Data, 'machine learning' is interpreted more broadly (than what machine learning professionals really mean by it) and includes pure statistical algorithms as well as other algorithms that are not based on 'learning'.

Earlier today, on 16th June, Microsoft announced a preview of a machine learning service called AzureML on its Azure cloud platform. With this service, business analysts can easily apply machine learning algorithms, such as the ones used for predictive analytics, to their data.

Machine learning itself has been popular for the last few years, and Microsoft has recognized the trend and jumped on it. When it comes to big players offering machine learning services on the cloud, Google pioneered this with its Prediction API a few years back.

Traditionally, data scientists use tools like MATLAB, R and Python (NumPy, SciPy, scikit-learn) for analyzing data. Programmers use open-source libraries like Apache Mahout and Weka for developing application services that use machine learning algorithms. However, having machine learning algorithms is not good enough; scaling those algorithms to big data is very important.
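
For readers less familiar with these tools, here is a minimal sketch of the kind of predictive model a data scientist typically builds with scikit-learn; the data is synthetic and purely illustrative. Libraries like Mahout or MLlib and services like AzureML aim to offer this kind of workflow at big data scale.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic example: predict whether a customer buys (1) or not (0)
# from two made-up features (e.g., visits per week, minutes on site).
X = [[1, 5], [2, 3], [8, 40], [9, 35], [3, 10], [7, 30], [2, 8], [10, 50]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression()
model.fit(X_train, y_train)                 # learn from historical data

print("accuracy:", model.score(X_test, y_test))
print("prediction for a new customer:", model.predict([[6, 25]]))
```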

Last year, Cloudera acqui-hired Myrrix and open sourced its machine-learning-on-Hadoop work as Oryx. Berkeley's AMPLab has open sourced its big data machine learning work, called MLbase, as part of Apache Spark, an open-source big data stack that is rapidly becoming popular.

The momentum in machine learning has already fueled a good amount of venture funding in this area.

  • SkyTree got $18 million in funding from U.S. Venture Partners, UPS and Scott McNealy.
  • Nutonian grabbed $4 million from Atlas Ventures for Big Data analytics.
  • Another startup, wise.io, raised $2.5 million from VCs led by Voyager Ventures. wise.io makes it easy to predict customer behavior using machine learning.
  • AlpineLabs, which came out of EMC, raised a Series B last year from Sierra Ventures, Mission Ventures and others. It provides a studio and an easy-to-assemble set of standard machine learning and analytics algorithms.
  • Oregon-based BigML raised $1.2 million last year to provide an easy-to-use machine learning cloud service.
  • RevolutionAnalytics, which got $37 million in total, makes R algorithms work on MapReduce.
  • and the list goes on

There is an interesting machine learning project called Vowpal Wabbit that started at Yahoo and continued at Microsoft. Interestingly, though, instead of VW, Microsoft is making the R language and its algorithms available on the Azure cloud.

Anyway, the trend of making machine learning services easy to run on Big Data and on the cloud will continue. But having the tools and algorithms available is not enough to solve the problem. We need qualified people who understand which algorithms to use for which cases and how to use (and parameterize) them. Moreover, what we really need are applications that use such algorithms to solve business problems without users even needing to understand the algorithms. In my opinion, what we will see in the future are such vertical applications/services that abstract (use but hide) machine learning or prediction algorithms to serve domain-specific business needs.

Facebook open sources Presto for interactive queries over petabytes of data (web scale)

Facebook today announced Presto, a distributed SQL query engine optimized for ad-hoc analysis at interactive speed. Facebook developed Presto, even though Hive already provides a query interface over big data, because of the need for lower-latency interactive queries over the web-scale data Facebook has. Looking at the results, it seems that Facebook engineering has been able to accomplish this mission. Presto is deployed in three (maybe more) geographical regions, and Facebook has scaled a single cluster to 1,000 nodes. It is being used by 1,000+ employees firing more than 30,000 queries (up 3,000 since June, when Presto was first revealed at the WebScale event) over a petabyte of data.


So, what does it do differently?

1. An optimized query planner and scheduler that fires multiple stages

2. Stages are executed in parallel

3. Intermediate results are kept in memory rather than persisted to HDFS, saving IO cost

4. Key code paths are written in optimized Java, directly generating optimized byte code

More importantly, in contrast with Hive, Presto does not use MapReduce for query processing.

One thing I liked about Presto is that it is built on a pluggable architecture. It can work with Hive, Scrub, and potentially your own storage. That opens up a good opportunity for its adoption. Of course, we need to compare this with Impala from Cloudera.
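
To show what firing an interactive query against Presto looks like from a client, here is a minimal sketch using the presto-python-client package (a client library that appeared after this post); the coordinator host, catalog and table names are hypothetical. The same SQL could also be run from Presto's own CLI.

```python
import prestodb

# Connect to a (hypothetical) Presto coordinator, using the Hive connector.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)

cur = conn.cursor()
# An ad-hoc, interactive aggregation over a hypothetical events table.
cur.execute("""
    SELECT event_type, COUNT(*) AS cnt
    FROM events
    GROUP BY event_type
    ORDER BY cnt DESC
    LIMIT 10
""")
for event_type, cnt in cur.fetchall():
    print(event_type, cnt)
```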

At the WebScale conference at Facebook's Menlo Park office back in June, it was mentioned that Presto would explore probabilistic sampling for quicker, approximate results within error bounds (a concept that BlinkDB implemented). I am not sure where that stands; however, BlinkDB already supports Presto in addition to Hive and Shark.

Code and docs:

http://prestodb.io/

https://github.com/facebook/presto

My Sessions in Recent Conferences in June 2013

This month I spoke in two sessions. At the JAX Conference on June 5th at the Santa Clara Convention Center, I spoke on Java PaaS cloud platforms. I covered GAE, Amazon Elastic Beanstalk, Cloud Foundry, CloudBees, OpenShift, Heroku and Oracle's PaaS platform. Here is the slide deck I presented.

Later, on the 10th, I spoke at the University of Washington, Seattle on Big Data Ecosystem and Application Architectures. In this session, I covered more of the Hadoop ecosystem and NoSQL-based MapReduce. I could only briefly mention Storm, Impala, etc. for NRT/RT querying. Maybe that is a topic for a separate session.

I will be speaking at NoSQL Now! in August on Data Modeling using NoSQL Databases. I will mainly be covering user profiles and real-time analytics.

http://nosql2013.dataversity.net/sessionPop.cfm?confid=74&proposalid=5551

Data on BigData

According to Transparency Market Research (a quick sanity check of these figures follows the list):
  • The compound annual growth rate (CAGR) of Big Data is projected to be 40% from 2012 to 2018
  • The global big data market was worth USD 6.3 billion in 2012 and is expected to reach USD 48.3 billion by 2018
  • Big Data tools: CAGR of 41.4% from 2012 to 2018
  • Storage: CAGR of 45.3% from 2012 to 2018
  • Major players (by revenue) last year: HP Co., Teradata, Opera Solutions, Mu Sigma and Splunk
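
The first two bullets are consistent with each other, as this small check (assuming the 40% CAGR applies to the overall market size) shows:

```python
# Growing USD 6.3 billion at ~40% per year over the six years 2012-2018
# should land close to the projected USD 48.3 billion.
market_2012 = 6.3   # USD billion
cagr = 0.40
years = 6

market_2018 = market_2012 * (1 + cagr) ** years
print(f"Projected 2018 market size: about {market_2018:.1f} billion USD")
# -> roughly 47.4 billion, in line with the USD 48.3 billion figure above.
```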

DRILL: A New Project Added to the Apache Incubator for Low Latency Queries on Large Data Sets in Hadoop

Just as Google's MapReduce and Google File System papers were the basis for Hadoop's MapReduce and HDFS respectively, another paper from Google has become the basis for a new project in Apache. A project named DRILL has recently been submitted to Apache, mainly by developers from MapR. The inspiration for this project is Google's paper on Dremel, a query system for very large data sets. Dremel has been the basis for Google BigQuery for a while.

This project would provide a 'low latency' query capability on HDFS. Hadoop is a great platform for Big Data, but it is more suitable for offline/batch processing based on the MapReduce pattern. However, many customers need a way to make real-time queries on the data residing in Hadoop/HDFS. DRILL will address that need.

This project has just started, and the first code is yet to be contributed. However, this will be an important addition to the Hadoop ecosystem. It will co-exist with Hive, which also provides query access.

GITPRO World 2012: Best Conference for Technology Professionals and Entrepreneurs

22 Jan 2012,

Cupertino, CA, USA

http://www.gitpro.org

 

The Global Indian Technology Professionals Association (GITPRO) is hosting a conference on “Emerging Technologies and Opportunities for Professionals and Entrepreneurs” on 18th Feb 2012 in Palo Alto, CA. With three parallel tracks focused on Technology, Career & Leadership, and Startups, this conference is well suited for everybody from technologists to entrepreneurs.

Steve Blank, the iconic serial entrepreneur and entrepreneurship coach at Stanford University, will be delivering a keynote. Anand Deshpande, the CEO of Persistent Systems, will also be delivering a keynote at the conference.

The Technology track is full of experts on Big Data, Hadoop, cloud, mobile and social computing. They are coming from Greenplum, Cloudera, Hortonworks, Microsoft, IBM, ThisMoment, AdMaxim, and GloMantra.

The Startup Bootcamp at GITPRO World 2012 will cover everything an entrepreneur should know, from launching a startup to a successful exit. Successful startup founders, VCs, and sales & marketing executives will be guiding aspiring entrepreneurs.

GITPRO World 2012 has sessions specially focused on career and leadership topics, covering aspects like managing with influence, evolving from an individual role to manager and leader, mid-career accelerators and mid-career switches, and job opportunities in 2012.

This event will provide a unique opportunity for Indian technology professionals to network with fellow professionals, technical experts, industry leaders, entrepreneurs, career coaches, and VCs.

GITPRO is a global networking platform for Indian technology professionals, supporting their professional development and self-development and their contribution back to the profession, society, and the people of the US and India. GITPRO, started in 2009, has chapters in Silicon Valley, Contra Costa Valley, Seattle, DC and Denver in the US, and Bangalore, Hyderabad and Pune in India.