CAP equivalent for analytics?

The CAP theorem deals with trade-offs in transactional systems. It hardly needs an introduction, unless of course you have been busy on the moon for the last couple of years, in which case you can easily Google for good intros. Here is the Wikipedia entry on the subject.

I was thinking about how I would build an ideal analytics system. The realization quickly came that all the "care-abouts" cannot be satisfied simultaneously, even assuming enough time for development. Some desirable properties must be sacrificed in favor of others, so architectural trade-offs are unavoidable in principle. I immediately had déjà vu regarding CAP. So the following is my take on the subject:

SVLC hypothesis regarding architectural trade-offs in analytics

I haven't come to a rigorous definition yet, so here is an intuitive one: current technology doesn't allow implementation of a single analytics system that is SVLC, that is, simultaneously sophisticated, high-volume, low-latency and low-cost. One of these four properties must be sacrificed, and the extent to which it is sacrificed determines the extent to which the other properties can be implemented.

Deep dive for the brave souls

Let’s reiterate the desired system properties first (see ideal analytics system):

  1. Deep Sophistication => …free-form SQL:2008 with multi-way joins of two or more big tables, sorts of big tables and all the rest of the data heavy lifting.
  2. High Volume => …handling big data volumes. Let's cap it at 1PB for now, for easier thinking.
  3. Low Latency => …subsecond response time per query on average. More concretely, latency must be low enough to allow an analyst to work with the system interactively, in a conversational manner.
  4. Low Cost => …I'll define it as commodity hardware, with software costs not exceeding hardware costs. More rigorously? $1/GB/month for actively queried data is my very rough estimate of low cost.
  5. Multi-form => any data: relational, serialized objects, text, etc.
  6. Security => speaks for itself.

I found that multi-formness and security don't interfere with implementing the rest of the properties and can in principle always be implemented in a satisfactory way without major compromises. Some nuances exist, though, but I'll ignore them for clarity. Removing them leaves the following list:

1. Sophistication (deep) => S
2. Volume (high) => V
3. Latency (low) => L
4. Cost (low) => C

These four are highly inter-related and form a constraint system. Implementing one to its full extent hampers the rest. Let's see what trade-offs we have here. Four properties, that is six potential simple two-extreme trade-offs. Let's settle on a geometric tetrahedron to model the architectural trade-off space. The four properties correspond to the four vertices, and the six trade-offs correspond to the six edges. We then model a particular trade-off by putting a point on the corresponding edge. So we get something like this:
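In lieu of a picture, here is a toy sketch of the same idea: a system's position in the trade-off space expressed as barycentric weights over the four S, V, L, C vertices. The weights sum to 1, so pushing one property up necessarily pulls the others down. This is my own illustration, not a formal model:

```python
# Toy model: a system's position in the S-V-L-C tetrahedron as
# barycentric coordinates; the weights sum to 1, so boosting one
# property necessarily takes weight away from the others.

def normalize(weights):
    """Scale raw property scores into barycentric coordinates."""
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def boost(position, prop, factor):
    """Boost one property; renormalization shrinks all the rest."""
    raw = dict(position)
    raw[prop] *= factor
    return normalize(raw)

# A temperate trade-off: everything implemented "somewhat".
temperate = normalize({"S": 1, "V": 1, "L": 1, "C": 1})

# Radically maxing out Volume shrinks the shares of S, L and C.
radical_v = boost(temperate, "V", 10)

print(temperate)
print(radical_v["V"] > temperate["V"])   # True: V grew...
print(radical_v["S"] < temperate["S"])   # True: ...at the expense of S
```
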

Okay, so far so good. Now I'll try to be a devil's advocate and challenge my own point that any trade-offs are necessary in the first place. So let's review the system denoted as

SVLC => high-volume, low-latency, deep analytics, low cost

Because it is low-latency, it will need I/O throughput adequate to scan the whole dataset quickly, and since it is high-volume (see above for the quantitative definition), meaning the aforementioned dataset is big, it will need a large number of individual nodes in the cluster to provide the required aggregate I/O throughput. The number of machines is further increased by the low-cost requirement, meaning that simpler servers in the mainstream sweet spot must be purchased. The system therefore becomes extremely distributed, with data dispersed all over it. Low-cost networking usually means TCP/IP, which is high-overhead, high-latency and low-throughput. Deep sophistication requires performing complex data-intensive operations, like full sorts of big datasets, joins of big tables or even a simple SELECT DISTINCT over big data, which will inevitably have long latencies. Once latency is long enough, the probability of a node failing mid-query becomes non-negligible. The latency increase then becomes self-perpetuating, because finer-grained materialization of intermediate results is required. This is needed to prevent never-ending query restarts and to provide a kind of resumable query; no solutions to resumable queries are documented other than MapReduce-style intermediate result materialization. This ultimately makes latency batch-class long, violating the low-latency requirement.
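To make the argument concrete, here is a back-of-envelope calculation; the per-node I/O figure is an assumption of mine for illustration, not a measurement:

```python
# Back-of-envelope: nodes needed to scan the whole dataset within the
# latency budget. All figures are assumptions for illustration only.

PB = 10**15

dataset_bytes = 1 * PB               # the high-Volume requirement from above
latency_sec = 1.0                    # the low-Latency budget: one second
node_io_bytes_per_sec = 1 * 10**9    # ~1 GB/s of disk I/O per commodity node

required_throughput = dataset_bytes / latency_sec
nodes = required_throughput / node_io_bytes_per_sec

print(f"aggregate throughput needed: {required_throughput / 10**12:.0f} TB/s")
print(f"commodity nodes needed: {nodes:,.0f}")
```

A million commodity nodes just to keep the disks fed, before any joins or shuffles, is why the full SVLC combination falls apart.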

I guess my proof misses the rigor required to be taken seriously by academics; I'm just an engineer :) I'd love to see it reworked into something more serious, though. I just hope to get the point across and to be of value to engineers and practicing architects.

Anyhow, this is the basis of my hypothesis: it is impossible to achieve full SVLC using today's technology.

Let's consider the other cases, where we give something up. It is easy to visualize such a trade-off as a 2D plane dissecting the tetrahedron. The three points where three edges are cut correspond to three trade-offs. For simplicity I'll elaborate only radical trade-offs in this post. Radical trade-offs are those where, on all six trade-off edges, one extreme is selected, which corresponds to putting the trade-off point on one of the vertices. Most real-world systems make temperate trade-offs, corresponding to a plane that dissects the tetrahedron into sizable parts. Moreover, real-world systems, especially those available from commercial vendors, are toolbox kinds of systems, meaning the system consists of a set of tools where each one makes a different set of trade-offs. It is then up to the engineer to choose the right tool for the job from the toolbox. However, the toolbox approach is not a loophole in this hypothesis, because the properties of the different tools don't add up in the desired way. For example, the simultaneous use of an expensive tool and a low-cost tool is still expensive; the simultaneous use of low-latency and high-latency tools is high-latency. Nevertheless, the toolbox approach is the best one for real-world problems, because real-world problems are usually decomposed into a number of subproblems where each may require a different tool.

Well… back to the radical systems. Let's consider all four cases where we completely give up one property to max out the remaining three:

SVL => high-volume, low-latency, deep analytics …giving up Cost… seems to be implementable. In its pure form it resembles a classic national-security-style system: subsecond querying of a petabyte-scale dataset with arbitrary joins. Heavily over-provisioned Netezza / Exadata / Greenplum / Aster and other MPP systems could do it, I believe. Data is kept in huge RAM or on flash; huge I/O is available to scan the whole dataset in a matter of seconds; high-speed, low-overhead networking with huge bisection bandwidth is available, capable of shuffling the whole dataset in a matter of seconds. InfiniBand/RDMA are probably the best fit. How bad can Cost get here? Well… unhealthy to imagine. Throw some numbers in anyway? I'll do some back-of-envelope calculations in future posts.

SVC => high-volume, low-cost, deep analytics …giving up Latency… seems to be implementable; in fact this is MapReduce territory, Hadoop's natural habitat. Are ETL systems SVC? I think not, because while they have given up Latency, they haven't kept up on Volume. How bad is Latency? Well… forget interactivity: create a queuing system and get notified when the job is done. If it's too slow, add servers. If some interactive experimentation is needed, use VLC first to develop and prove your hypothesis and only then crunch the data with SVC. Since cost is involved, I guess Hadoop MapReduce is really the king here. Though if Aster licenses, for example, are comparable to the overall commodity cluster cost, and not many multiples of it, then it could fit the category nicely. Otherwise it makes a great, if suboptimal (in the context of my model, not in a wider sense!), SV system. The great MapReduce debate is not for nothing!

SLC => low-latency, low-cost, deep analytics …giving up Volume… seems implementable in a minute: just start your favorite spreadsheet application ;) You will be shocked how much data Excel crunches in just a few seconds nowadays. Most traditional BI tools are in this category too. Heck, if not for BigData, the analytics industry would have become as exciting as enterprise payroll systems. Though innovation is possible even there. Heck, 99% of BI is fully feasible completely in-memory, often on a single server, and the deployment should be really low-risk, low-cost and very rapid if done correctly. Most cloud BI vendors are also in this category. R is here too. This was Kickfire's beloved spot, as it now is for QlikTech, GoodData, PivotLink, etc. So pretty much all BI vendors are here except the MPP heavy-lifters. How badly is Volume limited? Well, with CPU-DRAM bandwidth at 50GB/sec and DRAM sizes of 64GB on common commodity servers, I think crunching a few tens of GB should be well possible in a matter of seconds, if not for implementation sloppiness, and with literally pocket money (an average enterprise's pocket, not mine… yet).
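The back-of-envelope behind that last claim, using the same figures cited above (the number of passes over the data is my own assumption):

```python
# Rough in-memory scan estimate for a single commodity server,
# using the figures from the text; pass count is an assumption.

GB = 10**9

mem_bandwidth = 50 * GB   # CPU-DRAM bandwidth of a commodity server
dataset = 40 * GB         # "a few tens of GB"
passes = 3                # assume a handful of passes over the data

seconds = dataset * passes / mem_bandwidth
print(f"{seconds:.1f} s")   # a couple of seconds on a single box
```
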

VLC => high-volume, low-latency, low-cost …giving up Sophistication… seems to be implementable: do a simple scan and give up the Sophistication, particularly joins. Dremel and BigQuery seem to follow this approach. How bad is giving up Sophistication? Well, it all depends on how pre-joined/nested the dataset is. With normalized schemas, well… the unavailability of joins makes it pretty much impractical to implement any usable analytics. However, with a star schema and particularly with nested data (with some extensive pre-joins, even if that means some redundancy), this can work wonders for the vast majority of queries, completing them in seconds even on large datasets. However, no pre-join strategy will work for 100% of queries, and functions like COUNT DISTINCT must be approximated when run over big datasets, as described in the Dremel paper. I'll also assign sampling strategies to this category, because sophistication here also means accuracy. One clarification: only joins of several big tables are sacrificed here; joins of a big table with even a large number of small tables are perfectly okay and are done on the fly during the scan. Sorting a big table before it has been reduced to a manageable size is also sacrificed in this approach; however, approximation algorithms can be used for that too, and then it is okay as well.
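The approximated COUNT DISTINCT mentioned above is typically done with a sketch. Here is a minimal KMV (k minimum values) estimator, a simpler relative of the HyperLogLog family of algorithms; the constants and data are illustrative, and this is a sketch of the idea, not a production implementation:

```python
import hashlib

def kmv_estimate(items, k=256):
    """Approximate COUNT DISTINCT: keep the k smallest hash values.
    If the k-th smallest hash, as a fraction t of the hash space, is
    known, the distinct count is roughly (k - 1) / t."""
    space = 2**64
    smallest = sorted(
        {int(hashlib.sha1(str(x).encode()).hexdigest()[:16], 16) for x in items}
    )[:k]
    if len(smallest) < k:
        return len(smallest)          # fewer than k distinct values: exact
    return int((k - 1) * space / smallest[-1])

data = [i % 10_000 for i in range(1_000_000)]   # 10,000 distinct values
print(kmv_estimate(data))  # close to 10,000, not exact
```

The sketch needs only k hash values of state per column, no matter how big the scan, which is exactly why it suits a single-pass VLC-style system.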

Hence the conclusion: only 3 of the 4 SVLC properties can be implemented to their full extent in a single analytics system. The hypothesis goes that any attempt that allegedly violates it is in fact either not a single system, or has a latent impairment in one or more properties.

The extended hypothesis for fractional cases:

  • Systems/trade-offs may be radical or temperate. A radical trade-off completely gives up one of the four properties of the system. A temperate trade-off gives up a property only fractionally, at the expense of giving up the other properties also fractionally.
  • Most real-world systems are complex: they are a set of tools, where each separate tool embodies a concrete trade-off. The user of such a system can then apply tools with different trade-offs sequentially or simultaneously. This may seem a way out of the constraint; however, it is not, because, as noted above, the properties of separate tools don't add up in the desired way.
  • Most often, the trade-offs of real-world systems are temperate.

Analytics Patterns

Unsatisfied with my previous post's Advanced Analytics definition, and giving some thought to what advanced methods in analytics are, I realized that the analytics industry is missing a good analytics pattern catalog: a list of common problems followed by the common industry-consensus solutions to them, an equivalent of the GoF design patterns for analytics. A list where each item starts with a brief description of a common recurring analytics problem, follows with an elaboration of the commonly accepted solutions to this problem, and ends with a mandatory example section illustrating the solution using widely available tools.

Software engineers stole this idea from the real architects (those dealing with concrete structures, not abstract ones ;) ) some 15 years ago. They didn't avoid an initial short period of mass obsession and abuse of the concept… who does? But eventually it worked out quite well for them. I wonder if the analytics industry could leverage this experience and create a catalog of some 25-50 most common patterns. Pattern descriptions in the catalog should not exceed a few pages, and the number of patterns should be limited to a few tens, making wide industry adoption feasible.

What do you think? Any ideas? I'll try to make a first step by dumping patterns from my head right now (it is by no means a finished work):

I'll call them analytics patterns:

1. Predictive Analytics. That was the easiest for me; I was involved in it for the first time some 12 years ago. The system I worked on was used mostly to forecast sales, taking into account an array of causal factors like seasonality, marketing campaigns, historical growth rates, etc. The problem: a lot of time-based historical data is available, and future values must be forecast in the context of that historical data. The basic mechanism of implementing predictive analytics is to find, or less preferably to develop, a suitable mathematical model that fits the existing data closely (but be cautious about overfitting), usually time-series data, and then use the model to induce the forecasted values. In simple terms, it is a case of extrapolation. Correct me if I'm wrong. As was the case in the 90s, and I'm pretty sure is the case now, exotic hardcore AI approaches like neural networks and genetic programming are best kept exclusively for moonlighting experiments and as material for cooler conversation the next morning. With deadlines defined and a limited budget, it is best to stick to proven techniques to achieve quick wins. I think the value of working forecasting is self-evident.
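A minimal illustration of the extrapolation point: fit a simple trend-plus-seasonality model by least squares to synthetic monthly sales and forecast one year ahead. The data and model are made up for the sketch; real forecasting needs real care:

```python
import numpy as np

# Synthetic monthly sales: linear growth plus yearly seasonality plus noise.
rng = np.random.default_rng(0)
t = np.arange(48)                          # four years of monthly history
sales = 100 + 2.5 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 48)

# Model: sales ~ a + b*t + c*sin(season) + d*cos(season), fit by least squares.
def design(t):
    return np.column_stack([
        np.ones_like(t, dtype=float),
        t,
        np.sin(2 * np.pi * t / 12),
        np.cos(2 * np.pi * t / 12),
    ])

coef, *_ = np.linalg.lstsq(design(t), sales, rcond=None)

# Extrapolate one more year with the fitted model.
future = np.arange(48, 60)
forecast = design(future) @ coef
print(np.round(forecast, 1))
```
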

2. Clustering. Not the heavy, noisy kind in a cold hall :) but the statistics sub-discipline better called cluster analysis. The problem here: a lot of high-dimensionality data is available, and groups of similar observations must be discovered; in other words, the observations must be classified automatically. It is implemented by searching for correlations and grouping the records according to the discovered correlations. What is it good for? Well, in simple terms it helps discriminate between different kinds of objects and observe the specific properties of each kind. Without such grouping, one could only observe properties that all objects exhibit, or alternatively go object by object and observe each in isolation.
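A toy cluster-analysis sketch: a few iterations of the textbook k-means algorithm separating two synthetic groups. The data, initialization and constants are all invented for illustration:

```python
import random

random.seed(1)

# Two synthetic 2-D groups of observations.
group_a = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)]
group_b = [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]
points = group_a + group_b

def kmeans(points, k=2, iters=10):
    """Plain k-means: assign each point to its nearest center, then
    recompute each center as the mean of its cluster."""
    # Naive init from the data (k-means++ would be better in practice).
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: (p[0] - centers[c][0]) ** 2
                              + (p[1] - centers[c][1]) ** 2,
            )
            clusters[nearest].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

centers, clusters = kmeans(points)
print([(round(x, 1), round(y, 1)) for x, y in centers])  # near (0,0) and (5,5)
```
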

3. Risk Analysis, particularly through Monte Carlo simulation. It is not called Monte Carlo because it was invented there :) it is called so because of its reliance on random numbers, akin to the Monte Carlo casinos. Random numbers have proved to be the most effective way to simulate a mathematical model with a large number of free variables. With the advent of computers, it became a whole lot easier than using the book.
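A minimal Monte Carlo risk sketch: estimate the probability that a project with three uncertain cost components blows its budget. The distributions and numbers are made up purely for illustration:

```python
import random

random.seed(42)

def project_cost():
    """One simulated outcome: three independent uncertain cost components."""
    hardware = random.gauss(100, 10)          # mean 100, std 10
    licenses = random.gauss(50, 5)
    labor = random.triangular(80, 200, 110)   # min, max, most likely
    return hardware + licenses + labor

TRIALS = 100_000
outcomes = [project_cost() for _ in range(TRIALS)]
budget = 300
overrun_prob = sum(c > budget for c in outcomes) / TRIALS
print(f"P(cost > {budget}) ≈ {overrun_prob:.2f}")
```

The point of the method: no closed-form distribution of the total cost is ever derived; the free variables are simply sampled many times.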

4. Given a telecom event stream, run the events through a rules engine to detect and prevent telecom fraud in real time. This is essentially a CEP engine and is usually implemented by creating a state machine per rule and running the events through it. A special flavor of streaming SQL is used. A similar scheme can be used for real-time click-fraud prevention.
5. Given serialized object data or nested data, allow running ad-hoc interactive queries over it, BigQuery-fashion.
6. Given a normalized relational model, allow running any ad-hoc queries. For common joins, create materialized views to speed them up.
7. Canned reports. I guess they are good for some cases too…
8. OLAP/star schemas: when to use them? …
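The "state machine per rule" idea from pattern 4 can be sketched like this. The rule ("more than 3 international calls within 60 seconds"), the thresholds and the event shape are all invented for illustration:

```python
from collections import deque

class VelocityRule:
    """One rule = one small state machine: flag a caller who places
    more than `limit` international calls within `window` seconds."""

    def __init__(self, limit=3, window=60):
        self.limit, self.window = limit, window
        self.history = {}               # caller -> recent call timestamps

    def feed(self, event):
        if event["type"] != "intl_call":
            return False
        ts = self.history.setdefault(event["caller"], deque())
        ts.append(event["time"])
        while ts and event["time"] - ts[0] > self.window:
            ts.popleft()                # forget calls outside the window
        return len(ts) > self.limit     # True => fraud alert

rule = VelocityRule()
stream = [
    {"type": "intl_call", "caller": "A", "time": t}
    for t in (0, 10, 20, 30, 200)
]
alerts = [e["time"] for e in stream if rule.feed(e)]
print(alerts)   # the 4th call within 60s trips the rule
```

Each event updates only the rule's small in-memory state, which is what makes the scheme viable in real time.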

What else?

Of course this is just a first step; to do it correctly would be a project in itself, most probably in the form of a book. However, as the Chinese proverb goes, "A journey of a thousand miles begins with a single step."

Feature list of ultimate BigData analytics

  • Volume Scalability => the solution must handle high volumes of data, meaning the cost must scale linearly in the range of 10GB – 10PB.
  • Latency Scalability => the solution must support both interactive and batch use, and the cost must scale linearly in the range of 1 msec – 1 week.
  • Sophistication Scalability => the solution must support everything from simple summing scans to complex multi-way joins and statistics functionality, and the cost must scale linearly over the range from simplistic scans to full-blown SQL:2008/MDX/imperative in-database-analytics/MapReduce. Report/index viewing is not considered analytics at all, and particularly not low-sophistication analytics. Report/index creation is analytics and can be of varying degrees of sophistication. ETL systems are considered independent analytics systems.
  • Security => any unauthorized access to data must be prevented while, at the same time, in-place data analysis (like predicate evaluation) must remain possible and resource-efficient.
    • Keeping data always encrypted and keeping keys always on the client will not work: it would require shipping all the data to the client, which is a non-starter for big data analytics. So compromises must be made. The issue is especially contentious in a public cloud setting.
    • If data is stored encrypted and is continuously decrypted in-place, for predicate evaluation for example, then the keys must be kept in the same place (at least temporarily), which compromises the whole scheme, flooring its cost-benefit factor. The cost of decryption itself is also pretty high.
    • De-identification of all fields may work; random scaling may be applied to numeric fields with subsequent query/result rewrite.
    • Security-by-obscurity methods and a defense-in-depth approach may have good cost-benefit factors, matching or exceeding the overall security of the in-house approach.
  • Cost => must have a low TCO that scales linearly with dataset size and with the load factor caused by submitted queries. The breakdown (assuming cloud):
    • Storage component: linear with dataset size. Economies of scale must bring this cost down significantly; eventually it must be cheaper than on-site storage.
    • Computing component: linear with load, with infinite intra-query automatic elasticity. Guaranteed elasticity may bear a fixed premium proportional to the guaranteed capacity. Minor failures of a cloud component must not restart long-running queries.
    • Bandwidth component: FedExing hard drives is by far the cheapest way to upload data, and query results are really small. How much information can a human comprehend instantly, after all?
  • Multi-form =>
    • normalized relational
    • star-schema
    • cubes
    • serialized objects / nested data.
    • text
    • media
    • spatial
    • bio / scientific
    • topographical
    • and other data forms must be equally well supported and cross-queried.
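The "random scaling with subsequent query/result rewrite" idea from the security bullets can be sketched as follows: numeric fields are multiplied by a secret factor before upload, predicates are rewritten into the scaled domain, and aggregate results are scaled back. This is a toy scheme for illustration only, not a vetted security design:

```python
import random

# The secret factor stays client-side and is never uploaded.
SECRET_SCALE = random.uniform(10, 1000)

def deidentify(values):
    """What the server stores: scaled values only, not the originals."""
    return [v * SECRET_SCALE for v in values]

def rewrite_predicate(threshold):
    """Client rewrites `salary > threshold` into the scaled domain."""
    return threshold * SECRET_SCALE

def unscale(result):
    """Client scales additive aggregate results back."""
    return result / SECRET_SCALE

salaries = [50_000, 72_000, 91_000]
server_side = deidentify(salaries)

# The server evaluates the rewritten predicate without seeing real values.
scaled_threshold = rewrite_predicate(70_000)
matching_sum = sum(v for v in server_side if v > scaled_threshold)

print(unscale(matching_sum))   # sum of matching salaries, recovered client-side
```

Note the limitation this toy makes visible: order and sums survive scaling, but a determined attacker can still learn the ratios between values, which is part of why the bullet above calls the cost-benefit factor contentious.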

Terminology: Analysis vs. analytics and more…

I see a lot of confusion in the usage of the newer terms in analytics. I confuse them myself occasionally. I find it funny that an industry as serious as analytics tolerates constant renewal of its basic terminology. Yet, I confess, I'm very guilty of it myself: I do enjoy the freshness and novelty of the newer terms, even being fully aware that it is fake to a large extent.

In this post I'll take a step toward clearing up the confusion around a few of the most basic terms: analysis vs. analytics vs. BI, and all their common derivatives.

The Spoiler (the quick answer):

Analysis is the examination process itself, whereas analytics is the supporting technology and associated tools. BI is quite synonymous with analytics in an IT context. Advanced Analytics, Business Analytics, Data Analytics, Analytics Software and Analytics Technology are almost always marketing pleonasms (redundant expressions) and can be safely substituted with just 'analytics'. 'Data analysis' is yet another pleonasm. Compound expressions of these words, such as 'BI Analytic Technology', are yet again pleonasms, albeit of higher degree. Some nuances exist, though, and are elaborated in this post.

The deep dive for the brave souls:

Let’s attempt to properly define the terms and then carefully examine the alleged differences.

Before we dive in, a word of caution: definition by synonyms is wrong. It causes a stack overflow in the mind of a programmer. For example: "analysis" => "critical examination" => "examination" => "critical inspection" => "inspection" => "critical examination" => … => "why don't I just make myself a cup of coffee?"
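For the programmers among us, the joke holds up literally; a toy glossary of synonym-only definitions really does overflow the stack:

```python
# A toy dictionary where every definition is just a synonym, as lampooned above.
glossary = {
    "analysis": "critical examination",
    "critical examination": "examination",
    "examination": "critical inspection",
    "critical inspection": "inspection",
    "inspection": "critical examination",   # ...and the loop closes
}

def define(term):
    """Chase synonyms until reaching a real definition (spoiler: we never do)."""
    return define(glossary[term])

try:
    define("analysis")
except RecursionError:
    print("why don't I just make myself a cup of coffee?")
```
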

You can check what makes a good definition and what the common mistakes are… Well, apparently a quick look hasn't turned up good material on proper definitions, but for definition fallacies there is a nice Wikipedia article. If you find a good article on what makes a good definition, drop me a note or a comment; fittingly, it would have to include a definition of 'definition'.

Let’s start….

What is analysis?

Analysis is a pretty old, well-understood term that essentially means "breaking down" or "decomposition". More accurately: "the process of decomposing a complex entity into simpler components for easier comprehension of the aforementioned entity". As a child I did a lot of it to the toys and electronic appliances around me. I challenge you to find a better and more concise definition than mine above (it is a matter of taste, but anyway).

What is analytics?

Analytics is a newer term related to analysis, and looking it up will usually only add to the confusion, since definitions vary, are fuzzy, and seem to be context-dependent. Focusing on the IT context, I went through many usage examples and definitions. My verdict is that analytics simply means: the technology and the associated tools for data analysis.

If so, then 'data analytics technology' is a doubly redundant (or, more accurately, pleonastic) term, because analytics is a technology by itself and it is obvious that in an IT context only data can be analyzed. Hence the above phrase can be abbreviated to 'analytics' without any impairment of meaning. The same goes for 'data analytics tools'. However, when an IT context is not implied, something like 'data analytics software' could be appropriate: 'data' links it to IT, and 'software' further narrows its meaning.

Incorrect usage (according to my interpretation):

A software company most probably doesn't develop "next-gen data analysis" but "next-gen data analytics". And by the same token, "cloud computing analysis" means examining the concept of cloud computing, not using cloud computing as a tool for doing analysis. In the latter case, "cloud analytics" must be used.

An analyst performs in-database analysis, or applies in-database analytics to calculate something. However, an analyst doesn't perform in-database analytics.

If you look at the terms used by the QlikView folks, you will find pretty much all of the above terms used interchangeably, including the statement that they "provide fast, powerful and visual in-memory business analysis". One might think they provide business advice to companies in the memory business. Terminology aside, no bashing of QlikView: it is excellent analytics software and one of the very few that just work out of the box.

What is analytical?

In regard to data, it means it was compiled using analysis. In regard to a tool, it means it is intended for analysis.

Data Analysis and Data Analytics

As already mentioned, in an IT context both are pleonasms, and non-data analysis or non-data analytics are both oxymorons. So why stress 'data' anyway? Mostly there is no reason; in other cases it is there to hint at an IT context. For example, for bankers it is 'financial analytics', but for the IT folks in the bank it is 'data analytics'.

What does 'advanced analytics' hint at, then?

Well, I guess it is a way for a vendor to indicate that their analytics is less stagnant than their competitors' :) Seriously though, where it is really used to mean anything, I guess it means that statistical methods are implemented, like predictive modeling and clustering. It also has strong connotations with the Gartner press release naming it the second most promising technology for 2010.

What is wrong with just sticking with the older term, BI?

It is a fashion thing, I guess… who said IT is boring? We could easily challenge the Parisian fashion industry on that. Seriously though, BI is considered the more comprehensive approach, encompassing many aspects, usually cross-departmental, and notorious for high project failure rates. At least, that is the way younger startups portray it. On the other hand, 'data analytics' is portrayed as something simpler and more of a 'quick wins' departmental solution, something akin to a 'data mart'. And don't ask me what the difference from data marts is. Have I mentioned it's a fashion thing?

Well, aside from fashion, there are more rational reasons too, of course. A startup pitching BI sounds boring at best, with Microsoft, IBM and Oracle dominating the space; it must define a new disruptive category and then dominate it. Those who have read Christensen may recall that no new terms are necessary for disruption; somehow, though, it is easier to communicate using new terms. I would love to believe that it is not deceiving. In fact, masquerading advanced analytics as something completely distinct may work all the way from investors to the customer's CIO, who might find it suspicious that he is purchasing too many BI solutions, whereas purchasing his first "advanced analytics" solution, early enough, may seem quite smart and a sign that his organization is far from stagnation, especially just after reading the Gartner press release.

