Two Envelopes Problem: Am I just dumb?

It seems the recent craze about statistician being the profession of choice for the future is gaining steam: a future where we will be surrounded by quality BigData, capable computers and bug-free open-source software, including OpenDremel. Well, the last one I made up… but the rest seems to be the current situation. Acknowledging this, I was checking what the state of open-source statistics software is, who the guys behind it are, and so on. But that is not my topic today; it is the topic of one of my next posts. Today I want to talk about one of the strangest problems/paradoxes on the internet I have ever seen. The story is that I just now encountered "the two envelopes problem", or "paradox" as some put it. Having worked for a long time with math guys (guys with math mastery that I'll never reach in my life), I immediately recognized the problem, as I was teased with it a few times long ago. However, I was never told that it is such a big deal of a problem. Wikipedia lists it under "Unsolved problems in statistics". Huh? I never understood what is so paradoxical or hard or even interesting in it; to me it seemed a high-school-grade problem at most. So I put "two envelopes problem" into Google and found tens of blogs trying to explain it, proposing over-engineered, over-complicated and long solutions to such a simple problem. I have a very strange feeling that I'm either totally dumb or a genius, and I know I'm not a genius ;) Some sources mention that only Bayesian subjectivists suffer from this, but the large majority of other sources present it as a universal problem… Well, enough talking, let's dive into the simplest solution on the internet (or I will be embarrassed by someone pointing out my mistake or a similar solution published elsewhere).

The problem description for those who never heard about it:

You are approached by a guy who shows you two identical envelopes. Both envelopes contain money. You are allowed to pick and keep either one for yourself. After you pick one, the guy offers you the chance to swap envelopes. The question is whether one should swap. For me it is as clear as a sunny day that it doesn't matter whether you swap, and it is easily provable by simple math. Somehow most folks (some Ph.D. level!) get into very hairy calculations that suggest one should swap, and then even hairier ones explaining why one should not. Some mention subjectivity, but most don't.

The simplest solution on the internet (joking… but seriously, I haven't found a simpler one):

Let's denote the smaller sum in one of the envelopes as X; the larger sum in the other is then 2X. The expected value of the current envelope selection, before considering the swap, is 1.5X. How did I get that? Very simply: we have probability 0.5 of holding the envelope with the larger sum, which is 2X, and probability 0.5 of holding the envelope with the smaller sum, which is X. So:

0.5*2X + 0.5*X = X + 0.5X = 1.5X

So far so good… let's now calculate the expected value if we swap. If we swap, we have the same 0.5 probability of holding the larger sum and the same 0.5 probability of holding the smaller sum. No need to repeat the calculation: you will get exactly the same 1.5X as the expected value, meaning the swap doesn't matter. And if time has any value, it doesn't make sense to waste it swapping envelopes.
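If the algebra isn't convincing, a quick simulation settles it. Here is a minimal sketch; the amount range, trial count and seed are arbitrary choices of mine:

```python
import random

def simulate(trials=100_000, seed=42):
    """Simulate the two-envelope game and compare keeping vs. swapping."""
    rng = random.Random(seed)
    keep_total = 0
    swap_total = 0
    for _ in range(trials):
        x = rng.randint(1, 100)      # smaller sum X; the other envelope holds 2X
        envelopes = [x, 2 * x]
        pick = rng.randrange(2)      # pick one envelope at random
        keep_total += envelopes[pick]
        swap_total += envelopes[1 - pick]
    return keep_total / trials, swap_total / trials

keep_ev, swap_ev = simulate()
# Both averages hover around 1.5 * E[X]; swapping changes nothing.
```

Both averages come out statistically indistinguishable, exactly as the 1.5X argument predicts.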

Do you see it as a hard problem? I bet a 10-year-old would do fine with it, especially if offered some reward.

How come others get lost here?

The answer is that some try to apply Bayesian subjectivist probability theory, and then innocent folks follow them and get lost as well.

If you look at the Wikipedia article, for example, you will find a classic wrong solution that is allegedly "obvious", and then a link outside Wikipedia to a "correct" solution. The correct solution is a long post with a lot of formulae and use of Bayes' theorem that in the end arrives at the correct answer.

Well… I clearly see the flaw in the solution published on Wikipedia. That solution really looks artificial, but judging by the number of followers it must seem obvious to many. The blunder is in the third line:

The other envelope may contain either 2A or A/2

By A they denote the sum in the envelope they are holding. The mistake is in "either 2A or A/2"; it should be "either 2A or A". Then everything is OK and no "paradox" emerges in the end. The mistake stems from the fallacy of using the same name for two separate variables that are dependent but not equal, and then repeatedly confusing them since they share the same name. Here is a "patch" to be applied to the reasoning published on Wikipedia:

1. I denote by A the amount in my selected envelope. => FINE
2. The probability that A is the smaller amount is 1/2, and that it is the larger amount is also 1/2. => FINE
3. The other envelope may contain either 2A or A/2. => INCORRECT: the variable A denotes a different value in each of the two cases, so it is highly confusing to write it this way.

Let's explicitly consider the two cases here, instead of the implicit "either… or…":

In the first case, assume we are holding the smaller sum; then the other envelope contains 2A.
In the second case, assume we are holding the larger sum; then the other envelope contains A/2. However, this A is different from the A of the first case, so let's write it as _A/2.

Moreover, we know that _A is not just different from A but is exactly twice it, so
_A = 2A
So the expression "either 2A or A/2" must be written as "either 2A or _A/2", or, substituting _A = 2A, as "either 2A or A".

Then, when calculating the expected value, you also substitute A instead of A/2 and get the same expected value as before the swap.
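To make the patch concrete, here is the two-case bookkeeping spelled out with distinct variable names (X = 10 is an arbitrary example amount of mine):

```python
# X is the smaller sum; the two envelopes hold X and 2X.
X = 10  # arbitrary example amount

# Case 1: we hold the smaller sum. A denotes what we hold.
a_case1 = X                  # A = X
other_case1 = 2 * a_case1    # the other envelope holds 2A

# Case 2: we hold the larger sum. Call it _A to keep it distinct.
_a_case2 = 2 * X             # _A = 2X, i.e. _A = 2A
other_case2 = _a_case2 / 2   # the other envelope holds _A/2 = X

# Expected value before the swap: 0.5*X + 0.5*2X = 1.5X
ev_keep = 0.5 * a_case1 + 0.5 * _a_case2
# Expected value after the swap: 0.5*2X + 0.5*X = 1.5X, identical
ev_swap = 0.5 * other_case1 + 0.5 * other_case2
```

Once the two cases have separate names, the "paradoxical" 1.25A never appears: both expected values are 1.5X.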


That said, I have seen many people feel so "enlightened" by reading a complicated "correct" solution that they erroneously think, and argue, that one should not accept the following offer, believing it is equivalent to the above problem (well, not exactly this one, but I rephrased it for clarity):

A guy comes to you and says there are three envelopes. You are allowed to pick one and keep it. One envelope is red and two are white. All three contain money. One of the white envelopes contains twice as much as the red one; the other white one contains half as much as the red one. The white envelopes are identical and there is no way to know which one contains double and which one contains half. The question is which envelope you should choose: the red one or one of the white ones. And the answer is that you should pick one of the white envelopes! In fact, the calculation erroneously applied to the two-envelopes problem is 100% correct for this three-envelopes problem, and on average you will win by choosing one of the white envelopes rather than the red one.
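A quick simulation (the amount range and seed are arbitrary choices of mine) confirms that here, unlike in the two-envelope game, the 0.5*2R + 0.5*R/2 = 1.25R calculation is exactly right:

```python
import random

def three_envelopes(trials=100_000, seed=7):
    """Red holds R; one white holds 2R, the other R/2, indistinguishable."""
    rng = random.Random(seed)
    red_total = 0.0
    white_total = 0.0
    for _ in range(trials):
        r = rng.randint(1, 100)
        whites = [2 * r, r / 2]
        red_total += r
        white_total += rng.choice(whites)  # pick a white envelope blindly
    return red_total / trials, white_total / trials

red_ev, white_ev = three_envelopes()
# white_ev approaches 1.25 * E[R]: the white envelope wins on average.
```

The ratio white_ev / red_ev converges to 1.25, which is precisely the expression that is wrong for two envelopes but right for three.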

Debunking common misconceptions about SSD, particularly for analytics

1. SSD is NOT synonymous with flash memory.

First of all, let's settle on terms. SSD is best described as the concept of using semiconductor memory as a disk. There are two common cases: DRAM-as-disk and flash-as-disk. Flash memory is a semiconductor technology pretty similar to DRAM, just with a slightly different set of trade-offs.

Today there are few options for using flash memory in analytics beyond SSD. Nevertheless, that should not suggest that SSD is synonymous with flash memory. Flash memory can be used in products beyond SSD, and an SSD can use non-flash technology, for example DRAM.

So the question is: do we have any option of using flash memory in a form other than flash-as-disk?

FusionIO is the only one, and it was always bold in claiming that its products are not SSD but a totally new category of product, called ioMemory. Usually I dismiss such claims automatically, subconsciously, as the common practice of term obfuscation. However, in the case of FusionIO I found it to be a rare exception and technically true. At the hardware level there is no disk-related overhead in the FusionIO solution, and in my opinion FusionIO is the closest to the flash-on-motherboard vision among all the SSD manufacturers. That said, FusionIO succumbed to implementing a disk-oriented storage layer in software because of the unavailability of any other standard covering the flash-as-flash concept.

You can find more in-depth coverage of the New-Dynasty SSD versus Legacy SSD issue in a recent article by Zsolt Kerekes, albeit I'm not in 100% agreement with his categorization.

2. SSD DOESN’T provide more throughput than HDD.

The bragging about the performance density of SSD can safely be dismissed. There is no problem in stacking HDDs together: as many as 48 of them can be put in a single 4U server box, providing an aggregate throughput of 4GB/sec for a fraction of the SSD price. The same goes for power, vibration, noise, etc. The extent to which these properties are superior on SSD is uninteresting and doesn't justify the associated cost premium.

Further, for any given amount of money, HDD can provide significantly more I/O throughput than SSD from any of today's vendors, on any workload: read, write or combined. Not only that, but it will do so with an order of magnitude more capacity for your big data as an additional bonus. However, a few nuances must be considered:

  • If data is accessed in random small chunks (say, 16KB), then SSD will provide significantly more throughput (maybe a factor of 100) than disk. Reading in chunks of at least 1MB puts HDD back in the winner's seat of the throughput game.
  • Flash memory itself has great potential to provide an order of magnitude more throughput than disks; the mechanical "gramophone" technology of disks cannot compete in agility with electrons. However, this potential throughput is hopelessly left unexploited by today's SSD controllers. How bad is it? Pretty bad: SSD controllers pass on less than 10% of the potential throughput. The reasons include flash-management complexity; cost constraints leading to small embedded DRAM buffers and computationally weak controllers; and, the main reason, that there is no standard for a 100x-faster disk, nor could legacy software keep up with multi-gigabyte throughputs anyway. So SSD vendors don't bother, and are obsessed with the laughable idea of bankrupting HDD manufacturers, calling the technology disruptive, which it is not by definition. We end up with a much more expensive disk replacement that is only barely more performant, throughput-wise, than a vanilla low-cost HDD array.
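A back-of-envelope model makes the chunk-size effect concrete. The figures below (8 ms HDD seek, 150 MB/s HDD transfer, 0.1 ms access and 250 MB/s transfer for SSD) are my rough assumptions for commodity hardware of the day, not vendor specs:

```python
def effective_throughput(chunk_bytes, seek_s, transfer_bps):
    """Sustained throughput when reading random chunks of a given size."""
    return chunk_bytes / (seek_s + chunk_bytes / transfer_bps)

MB = 1024 * 1024
KB = 1024

# Rough assumed figures; adjust for your hardware:
hdd = dict(seek_s=0.008, transfer_bps=150 * MB)   # 8 ms seek, 150 MB/s
ssd = dict(seek_s=0.0001, transfer_bps=250 * MB)  # 0.1 ms access, 250 MB/s

for chunk in (16 * KB, 1 * MB, 16 * MB):
    h = effective_throughput(chunk, **hdd)
    s = effective_throughput(chunk, **ssd)
    print(f"{chunk // KB:>6} KB chunks: HDD {h / MB:6.1f} MB/s, SSD {s / MB:6.1f} MB/s")
```

At 16KB chunks the seek time dominates and a single HDD delivers only a couple of MB/s, while at 1MB chunks it already approaches its sequential rate; a rack of cheap HDDs then wins on aggregate throughput per dollar.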

3. An array of SSDs DOESN'T provide a larger number of useful IOPS than an array of disks.

While it is true that one SSD can easily match a disk array in IOPS, it should not suggest that an array of SSDs will provide a larger number of useful IOPS. The reason is prosaic: an array of disks already provides an abundance of IOPS, many times more than enough for any analytic application. Any additional IOPS are simply not needed, and the astronomical IOPS numbers of SSD arrays are a solution looking for a problem in the analytics industry.

4. SSD is NOT disruptive to disks.

Well, if it is, it is not according to Clayton Christensen's definition of "disruptiveness". As far as I remember, Christensen defines technology A as disruptive to technology B when all of the following hold true:

  • A is worse than technology B in quality and features
  • A is cheaper than technology B
  • A is affordable to a large number of new users to whom technology B is appealing but too costly.

The SSD-to-disk pair clearly satisfies none of the conditions above, so I'm puzzled how one can call it disruptive to disks.

Again: I'm not claiming that SSD or flash memory is not disruptive to any technology; I'm just claiming that SSD is not disruptive to HDD. In fact, I think flash memory IS disruptive to DRAM: all three conditions above hold for the flash-to-DRAM pair. Also, directly attached SSDs are highly disruptive to SAN.


Make no mistake, I'm a true believer in flash memory as a game-changer for analytics, just not in the form of a disk replacement. I'll explore in my upcoming posts the ideas of where flash memory can make a change. I know I totally skipped quantitative proofs for all the claims above, but… well… let's leave that for the comment section.

Also, one of the best coverages of flash memory for analytics (and not coming from a flash vendor) is by Curt Monash on the DBMS2 blog.

Google Percolator: MapReduce Demise?

Here are my early thoughts after quickly looking into Google Percolator and skimming the paper.

Major take-away: massive transactional mutation of a tens-of-petabytes-scale dataset on a thousands-node cluster is possible!

MapReduce is still useful for distributed sorts of big data and a few other things; nevertheless, its "karma" has suffered a blow. Before, you could end any MapReduce dispute by saying "well… it works for Google"; nowadays, before you say it, you would hear "well… it didn't work for Google". MapReduce is particularly criticized for 1) too-long latency, 2) being too wasteful, requiring a full rework of the whole tens-of-PB-scale dataset even if only a fraction of it has changed, and 3) inability to support near-real-time data processing (meaning processing documents as they are crawled and updating the index appropriately). In short: welcome to the disillusionment stage of the MapReduce saga. Luckily, Hadoop is not only MapReduce. I'm convinced Hadoop will thrive and flourish beyond MapReduce, and MapReduce, being an important big-data tool, will be widely used where it really makes sense rather than misused or abused in various ways. Aster Data and the remaining MPP startups can relax on the issue a bit.

Probably a topic for another post, but I think MapReduce is best leveraged as an ETL tool.

See also for another view on the issue. There are a few other posts already published on Percolator, but I haven't yet looked into them.

I'm very happy about my SVLC hypothesis. I think I have known it for a long time, but somehow only now, after putting it on paper, has reasoning about different analytics approaches become easier. It is like having a map instead of visualizing one. So where is Percolator in the context of SVLC? If it is still considered analytics, Percolator is an SVC system: giving up latency for everything else, albeit to a much lesser degree than its predecessor MapReduce. That said, Percolator has a sizable part that is not analytics anymore but rather transaction processing, and transaction processing is not usefully modeled by my SVLC hypothesis. In summary: Percolator makes essentially the same trade-off as MapReduce, sacrificing latency for volume-cost-sophistication, but it is more temperate, more rounded, less radical.

Unfortunately, I haven't had enough time to enjoy the paper as it should be enjoyed, with easy weekend-style reading, so inaccuracies may have infiltrated the following:

  • Percolator is a big-data, ACID-compliant, transaction-processing, non-relational DBMS.
  • Percolator fits most NoSQL definitions and therefore is NoSQL.
  • Percolator continuously mutates the dataset (called the data corpus in the paper) with full transactional semantics, at sizes of tens of petabytes on thousands of nodes.
  • Percolator uses a message-queue-style approach for processing crawled data. Meaning, it processes crawled pages continuously as they arrive, updating the index database transactionally.
  • BEFORE Percolator: indexing was done in stages taking weeks. All crawled data was accumulated and staged first, then transformed pass-after-pass into the index; 100 passes were quoted in the paper, as I remember. When one cycle was completed, a new one was initiated. A few weeks of latency between content being published and it appearing in Google search results was considered too long in the Twitter age, so Google implemented some shortcuts allowing preliminary results to show in search before the cycle completed.
  • Percolator doesn't have a declarative query language.
  • No joins.
  • Self-stated ~3% single-node efficiency relative to a state-of-the-art DBMS on a single node. That's the price for handling (which here means transactionally mutating) a high-volume dataset relatively cheaply. Kudos to Google for being so open about this and not exercising term obfuscation. On the other hand, they can afford it: they don't have to sell it tomorrow on the rather competitive NoSQL market ;)
  • Thread-per-transaction model; heavily threaded many-core servers, as I understand it.

Architecturally, it reminds me of MoM (Message-Oriented Middleware) with transactional queues and guaranteed delivery.

Definitely to be continued…


CAP equivalent for analytics?

The CAP theorem deals with trade-offs in transactional systems. It doesn't need an introduction, unless of course you have been on the moon for the last couple of years, in which case you can easily Google for good intros; there is a Wikipedia entry on the subject.

I was thinking about how I would build an ideal analytics system. The realization quickly came that all the "care-abouts" cannot be satisfied simultaneously, even assuming enough time for development. Some desirable properties must be sacrificed in favor of others; hence architectural trade-offs are unavoidable in principle. I immediately had déjà vu regarding CAP. So the following is my take on the subject:

SVLC hypothesis regarding architectural trade-offs in analytics

I haven't come to a rigorous definition yet; here is an intuitive one: current technology doesn't allow implementation of a single analytics system that is SVLC, that is, simultaneously sophisticated, high-volume, low-latency and low-cost. One of these four properties must be sacrificed, and the extent to which it is sacrificed determines the extent to which the other properties can potentially be implemented.

Deep dive for the brave souls

Let’s reiterate the desired system properties first (see ideal analytics system):

  1. Deep Sophistication => …free-form SQL:2008 with multi-way joins of 2 or more big tables, sorts of big tables, and all the rest of the data heavy lifting.
  2. High Volume => …handling big data volumes; let's cap it at 1PB for now for easier thinking.
  3. Low Latency => …subsecond response time per query on average. More concretely, latency must be low enough to allow an analyst to work with the system interactively, in a conversational manner.
  4. Low Cost => …I'll define it as commodity hardware, with software costs not exceeding hardware costs. More rigorously? $1/GB/month for actively queried data is my very rough estimate of low-cost.
  5. Multi-form => any data: relational, serialized objects, text, etc.
  6. Security => speaks for itself.

I found that multi-formness and security don't interfere with implementing the rest of the properties and can in principle always be implemented satisfactorily without major compromises. Some nuances exist, though, but I'll ignore them for clarity. So, removing them, we get the following list:

1. Sophistication (deep) => S
2. Volume (high) => V
3. Latency (low) => L
4. Cost (low) => C

These four are highly inter-related and form a constraint system. Implementing one to its full extent hampers the rest. Let's see what trade-offs we have here: four properties, that is, 6 potential simple two-extreme trade-offs. Let's settle on a geometric tetrahedron to model the architectural trade-off space. The four properties correspond to the four vertices, and the six trade-offs correspond to the six edges. Then we model a particular trade-off by putting a point on the corresponding edge. So we get something like this:

Okay, so far so good. Now I'll try to be a devil's advocate and challenge my own point that any trade-offs are necessary in the first place. So let's review the system denoted as

SVLC=> high-volume, low-latency, deep analytics, low cost

Because it is low-latency, it will need I/O throughput adequate to scan the whole dataset quickly; and since it is high-volume (see above for a quantitative definition), meaning the aforementioned dataset is big, it will need a large number of individual nodes in the cluster to provide the required aggregate I/O throughput. The number of machines is further increased by the low-cost requirement, meaning that simpler servers in the mainstream sweet spot must be purchased. Therefore the system becomes extremely distributed, with data dispersed all over it. Low-cost networking usually means TCP/IP, which is high-overhead, high-latency and low-throughput. Deep sophistication requires performing complex data-intensive operations, like full sorting of big datasets, joining big tables, or just a simple SELECT DISTINCT over big data, which will inevitably have long latencies. Once latency is long enough, the probability of a node failing mid-query becomes non-negligible. The latency increase then becomes self-perpetuating because of the required finer grain of intermediate-result materialization, which is needed to prevent never-ending query restarts and provide a kind of resumable query. No solutions to resumable queries are documented other than MapReduce-style intermediate-result materialization. This ultimately makes latency batch-class long, violating the low-latency requirement.
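A back-of-envelope calculation illustrates the scale involved; the per-node scan rate and per-node failure probability below are my own rough assumptions, not measured figures:

```python
PB = 10 ** 15
GB = 10 ** 9

dataset = 1 * PB        # "high volume" per the quantitative definition above
per_node_io = 1 * GB    # assumed scan rate of one commodity node, bytes/sec
target_latency = 1.0    # seconds: the "low latency" goal

# Nodes needed just to supply the aggregate scan throughput:
nodes_needed = dataset / (per_node_io * target_latency)

# With that many nodes, some node failing mid-query is near-certain:
p_node_fail = 1e-5      # assumed probability that one node dies during one query
p_any_fail = 1 - (1 - p_node_fail) ** nodes_needed
```

Under these assumptions you need on the order of a million commodity nodes for the raw scan alone, and the probability that at least one of them fails during a query is essentially 1, which is exactly why intermediate-result materialization (and hence batch-class latency) creeps in.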

I guess my proof misses the rigor required to be considered seriously by academics; I'm just an engineer :) I'd love to see it reworked into something more serious, though. I just hope to get the point across and to be of value to engineers and practicing architects.

Anyhow, this is the basis of my hypothesis, showing that it is impossible to achieve full SVLC using today's technology.

Let's consider the other cases, where we give something up. It is easy to visualize such a trade-off as a 2D plane dissecting the tetrahedron. The three points where it cuts three edges correspond to three trade-offs. For simplicity, I'll elaborate only the radical trade-offs in this post. Radical trade-offs are those where on all six trade-off edges one extreme is selected, which corresponds to putting the trade-off point on one of the vertices. Most real-world systems make temperate trade-offs, corresponding to a plane that dissects the tetrahedron into sizable parts. Moreover, real-world systems, especially those available from commercial vendors, are toolbox kinds of systems, meaning the system consists of a set of tools where each one makes a different set of trade-offs; it is then up to the engineer to choose the right tool for the job from the toolbox. However, the toolbox approach is not a loophole in this hypothesis, because the properties of the different tools don't add up in the desired way. For example, the simultaneous use of an expensive tool and a low-cost tool is still expensive; the simultaneous use of low-latency and high-latency is high-latency. Nevertheless, the toolbox approach is the best one for real-world problems, because real-world problems are usually decomposed into a number of subproblems where each may require a different tool.

Well… back to the radical systems. Let's consider all four cases where we completely give up one property to max out the remaining three:

SVL => high-volume, low-latency, deep analytics …giving up Cost… seems implementable. In its pure form it resembles a classic national-security-style system: subsecond querying of a petabyte-scale dataset with arbitrary joins. A heavily over-provisioned Netezza / Exadata / Greenplum / Aster or other MPP system could do it, I believe. Data is kept in huge RAM or on flash; huge I/O is available to scan the whole dataset in a matter of seconds; high-speed, low-overhead networking with huge bi-section bandwidth is available, capable of shuffling the whole dataset in a matter of seconds. Infiniband/RDMA are probably the best fit. How bad can Cost get here? Well… it's unhealthy to imagine. Throw in some numbers anyway? I will do some back-of-envelope calculations in future posts.

SVC => high-volume, low-cost, deep analytics …giving up Latency… seems implementable; in fact it is MapReduce territory, Hadoop's natural habitat. Are ETL systems SVC? I think not, because while they give up Latency, they don't keep up on Volume. How bad is Latency? Well… forget interactivity; create a queuing system and get notified when the job is done. If it's too slow, add servers. If some interactive experimentation is needed, use a VLC system first to develop and prove your hypothesis, and only then crunch the data with SVC. Since cost is involved, I guess Hadoop MapReduce is really the king here. Though if Aster licenses, for example, are comparable to the overall commodity-cluster cost, and not many multiples of it, then it could fit the category nicely; otherwise it makes a suboptimal (in the context of my model, not in a wider sense!) but great SV system. The great MapReduce debate is not for nothing!

SLC => low-latency, low-cost, deep analytics …giving up Volume… seems implementable in a minute: just start your favorite spreadsheet application ;) You would be shocked how much data Excel crunches in just a few seconds nowadays. Most traditional BI tools are in this category too. Heck, if not for BigData, the analytics industry would have become as exciting as enterprise payroll systems, though innovation is possible even there. 99% of BI is fully feasible completely in-memory, often on a single server, and the deployment can be really low-risk, low-cost and very rapid if done correctly. Most cloud BI vendors are also in this category; the R project is here too. This was Kickfire's beloved spot, as it now is for QlikTech, GoodData, PivotLink, etc. So pretty much all BI vendors are here except the MPP heavy-lifters. How badly is Volume limited? Well, with CPU-DRAM bandwidth at 50GB/sec and DRAM sizes of 64GB on common commodity servers, I think crunching a few tens of GB should be well possible in a matter of seconds, if not for implementation sloppiness, and with literally pocket money (an average enterprise's pocket, not mine… yet).

VLC => high-volume, low-latency, low-cost …giving up Sophistication… seems implementable: doing a simple scan and giving up sophistication, particularly joins. Dremel and BigQuery seem to follow this approach. How bad is giving up Sophistication? Well, it all depends on how pre-joined/nested the dataset is. With normalized schemas, the unavailability of joins makes it pretty much impractical to implement any usable analytics. However, with a star schema, and particularly with nested data (with some extensive pre-joins, even if that means some redundancy), this can work wonders for the vast majority of queries, completing them in seconds even on large datasets. However, no pre-join strategy will work for 100% of queries, and functions like COUNT DISTINCT must be approximated when run over a big dataset, as described in the Dremel paper. I'll also assign the sampling strategy to this category, because sophistication also means accuracy here. One clarification: only joins of several big tables are sacrificed here; joins of a big table with even a large number of small tables are perfectly okay and are done on the fly during the scan. Sorts of a big table before it has been reduced to a manageable size are also sacrificed in this approach; however, approximation algorithms can be used here too, and then it is okay as well.

Hence the conclusion: only 3 of the 4 SVLC properties can be implemented to their full extent in a single analytics system. The hypothesis goes that any attempt that allegedly violates it either is not in fact a single system, or has latent impairment in one or more properties.

[TODO: rewrite] The extended hypothesis for fractional cases:

  • Systems/trade-offs may be radical or temperate. A radical trade-off completely gives up one of the four properties of the system. A temperate trade-off gives up a property only fractionally, at the expense of giving up other properties also fractionally.
  • Most real-world systems are complex. They are a set of tools, where each separate tool is a concrete trade-off. The user of such a system can then use different tools with different trade-offs sequentially or simultaneously. This may seem a way out of the constraint; however it is not, because the properties of separate tools don't add up in the desired way. For example, the simultaneous use of an expensive tool and a low-cost tool is still expensive; the simultaneous use of low-latency and high-latency is high-latency. Nevertheless, the toolbox approach is the best one for real-world problems, because real-world problems are usually decomposed into a number of subproblems where each may require a different tool.
  • Most often, the trade-offs of real-world systems are temperate.

Analytics Patterns

Unsatisfied by my previous post's definition of Advanced Analytics, and giving some thought to what advanced methods in analytics are, I realized that the analytics industry misses a good analytics pattern catalog: a list of common problems followed by a list of common industry-consensus solutions to them; an equivalent of the GoF design patterns for analytics. In such a list, each item starts with a brief description of a common recurring analytics problem, followed by an elaboration of the commonly accepted solutions to it, followed by a mandatory example section illustrating the solution using widely available tools.

Software engineers stole this idea from real architects (those dealing with concrete structures, not abstract ones ;) ) 15 years ago. They didn't avoid an initial short period of mass obsession and abuse of the concept… who does? But eventually it worked out quite well for them. I wonder if the analytics industry could leverage this experience and create a catalog of some 25-50 most common patterns, with pattern descriptions not exceeding a few pages and the number of patterns limited to a few tens, making wide industry adoption feasible.

What do you think? Any ideas? I'll try to make a first step by dumping some patterns from my head right now (it is by no means a finished work):

I’ll call it analytics patterns:

1. Predictive Analytics. That was the easiest for me: I was involved in it for the first time some 12 years ago, developing a system used mostly to forecast sales, taking into account an array of causal factors like seasonality, marketing campaigns, historical growth rates, etc. The problem: there is a lot of time-based historical data available, and it is required to forecast future values in the context of the given historical data. The basic mechanism of implementing predictive analytics is to find, or less preferably to develop, a suitable mathematical model that closely models the existing data (but be cautious about overfitting), usually time-series data, and then use the model to induce forecasted values. In simple terms, it is a case of extrapolation. Correct me if I'm wrong. As was the case in the 90s, I'm pretty sure it is the case now that exotic hardcore AI approaches like neural networks and genetic programming are best kept exclusively for moonlighting experiments and as material for cooler conversation the next morning. With deadlines defined and a limited budget, it is best to stick to proven techniques to achieve quick wins. I think the value of working forecasting is self-evident.
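As a toy illustration of the "proven techniques" point, here is a minimal trend-plus-seasonality extrapolation; the numbers are made up, and the model is deliberately the simplest thing that could work:

```python
# Minimal trend + seasonality forecast; illustrative numbers only.
history = [10, 12, 14, 20, 14, 16, 18, 24, 18, 20, 22, 28]  # 3 "years" of 4 "quarters"
season_len = 4

n = len(history)
# Fit a linear trend a + b*t by ordinary least squares.
t_mean = (n - 1) / 2
y_mean = sum(history) / n
b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history)) / \
    sum((t - t_mean) ** 2 for t in range(n))
a = y_mean - b * t_mean

# Average residual per season position gives the seasonal component.
seasonal = [0.0] * season_len
for t, y in enumerate(history):
    seasonal[t % season_len] += (y - (a + b * t)) / (n // season_len)

# Extrapolate one full season ahead.
forecast = [a + b * t + seasonal[t % season_len] for t in range(n, n + season_len)]
```

Real forecasting adds damping, changing seasonality, causal factors and so on, but the skeleton of "fit a model, then extrapolate" stays the same.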

2. Clustering. Well, not the heavy noisy one in a cold hall :) but the statistics sub-discipline better called cluster analysis. The problem here: a lot of high-dimensionality data is available, and it is required to discover groups of similar observations, in other words to automatically classify them. It is implemented by searching for correlations and grouping the records according to the discovered correlations. What is it good for? Well, in simple terms it helps to discriminate different kinds of objects and observe the specific properties of each kind. Without such grouping, one would only be able to observe properties that all objects exhibit, or alternatively go object by object and observe each in isolation.
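A minimal k-means sketch shows the mechanics on toy 2-D data; note the naive "first k points" initialization, where a real implementation would use something like k-means++:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: partition observations around k centroids."""
    centroids = list(points[:k])  # naive init: first k points
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's group.
        groups = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            groups[dists.index(min(dists))].append(p)
        # Update step: move each centroid to the mean of its group.
        centroids = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups

# Two obvious groups, around (0, 0) and (10, 10):
pts = [(0.1, 0.2), (0.3, -0.1), (-0.2, 0.0), (9.8, 10.1), (10.2, 9.9), (10.0, 10.3)]
centroids, groups = kmeans(pts, 2)
```

After a few iterations the centroids settle near the two group centers, and each group can then be inspected for its own specific properties.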

3. Risk Analysis, particularly through Monte Carlo simulation. It is not called Monte Carlo because it was invented there :) it is called so because of the reliance on random numbers, akin to Monte Carlo casinos. Random numbers have proven the most effective way to simulate a mathematical model with a large number of free variables, and with the advent of computers it became a whole lot easier than using printed tables of random numbers.
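A minimal Monte Carlo sketch for cost risk; the phase names and triangular distributions are invented for illustration:

```python
import random

def project_cost_risk(trials=50_000, seed=1):
    """Monte Carlo sketch: total cost of three uncertain project phases."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Each phase cost is uncertain: triangular(best, worst, most-likely).
        design = rng.triangular(10, 30, 15)
        build = rng.triangular(40, 120, 60)
        deploy = rng.triangular(5, 25, 10)
        totals.append(design + build + deploy)
    totals.sort()
    mean = sum(totals) / trials
    p90 = totals[int(trials * 0.9)]  # 90th percentile: the "risk" number
    return mean, p90

mean_cost, p90_cost = project_cost_risk()
```

Instead of a single-point estimate you get a whole distribution, so you can quote a 90th-percentile cost rather than just an average.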

4. Given a telecom event stream, run the events through a rules engine to detect and prevent telecom fraud in real time. This is essentially a CEP engine, usually implemented by creating a state machine per rule and running the events through it. A special dialect of streaming SQL is often used. A similar scheme can be used for real-time click-fraud prevention.
5. Given serialized object data or nested data, allow running ad-hoc interactive queries over it, BigQuery-style.
6. Given a normalized relational model, allow running any ad-hoc queries. For common joins, create a materialized view to speed them up.
7. Canned reports. I guess they are good for some cases too…
8. OLAP/star schema: when to use it? …
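To illustrate the state-machine-per-rule idea from item 4, here is a toy sketch; the event fields, the rule and the thresholds are all made up for illustration.

```python
# Rough sketch of "state machine per rule": a toy fraud rule that fires
# when one subscriber places more than 3 international calls within 60
# seconds. Event fields and thresholds are invented.
from collections import defaultdict, deque

WINDOW, LIMIT = 60, 3

class VelocityRule:
    """One state-machine instance per subscriber (keyed state)."""
    def __init__(self):
        self.calls = defaultdict(deque)      # subscriber -> call timestamps

    def on_event(self, event):
        if event["type"] != "intl_call":
            return None                      # this rule ignores other events
        q = self.calls[event["subscriber"]]
        q.append(event["ts"])
        while q and event["ts"] - q[0] > WINDOW:
            q.popleft()                      # slide the time window
        if len(q) > LIMIT:
            return ("ALERT", event["subscriber"])
        return None

rule = VelocityRule()
stream = [{"type": "intl_call", "subscriber": "A", "ts": t} for t in (1, 5, 20, 30)]
alerts = [a for e in stream if (a := rule.on_event(e))]
```

A real CEP engine would compile many such rules from a streaming-SQL dialect and run them over the live event feed; the keyed sliding-window state machine is the core of it.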

What else?
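Going back to item 6 for a moment: the materialized-view trick amounts to precomputing a common join once so that later ad-hoc queries become single scans. The tables and fields below are invented for illustration.

```python
# Tiny sketch of the materialized-view idea: precompute a common join
# and answer ad-hoc queries from the precomputed rows.
orders = [{"id": 1, "cust_id": 10, "total": 99},
          {"id": 2, "cust_id": 11, "total": 25}]
customers = {10: {"name": "Acme"}, 11: {"name": "Globex"}}

# "Materialize" the join of orders and customers up front...
order_view = [{**o, "cust_name": customers[o["cust_id"]]["name"]} for o in orders]

# ...so a later ad-hoc query is a single scan instead of a join.
big_acme_orders = [r for r in order_view
                   if r["cust_name"] == "Acme" and r["total"] > 50]
```

The trade-off is the usual one: the view must be refreshed when the base tables change, in exchange for much cheaper reads.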

Of course this is just a first step, and doing it correctly would be a project in itself, most probably in the form of a book. However, as the Chinese proverb goes, "A journey of a thousand miles begins with a single step".

Feature list of ultimate BigData analytics

  • Volume Scalability => the solution must handle high volumes of data, meaning the cost must scale linearly across the range of 10GB – 10PB.
  • Latency Scalability => the solution must support anything from interactive to batch execution, and the cost must scale linearly across the range of 1 msec – 1 week.
  • Sophistication Scalability => the solution must support anything from simple summing scans to complex multi-way joins and statistical functionality, and the cost must scale linearly across the range from simplistic scans to full-blown SQL:2008/MDX/imperative in-database-analytics/MapReduce. Report/index viewing is not considered analytics at all, and in particular not low-sophistication analytics. Report/index creation is analytics and can be of varying degrees of sophistication. ETL systems are considered independent analytic systems.
  • Security => any unauthorized access to data must be prevented while, at the same time, in-place data analysis (like predicate evaluation) must remain possible and resource-efficient.
    • Keeping the data always encrypted and the keys always on the client will not work. It would require shipping all the data to the client and is a non-starter for big data analytics. So compromises must be made. The issue is especially contentious in a public cloud setting.
    • If data is stored encrypted and is continuously decrypted in-place for predicate evaluation, for example, it means that the keys must be kept in the same place (at least temporarily), which compromises the whole scheme altogether, flooring its cost-benefit factor. The cost of decryption is also pretty high.
    • De-identification of all fields may work; random scaling may be applied to numeric fields with subsequent query/result rewriting.
    • Security-by-obscurity methods and a defense-in-depth approach may have good cost-benefit factors, matching or exceeding the overall security of an in-house approach.
  • Cost => must have a low TCO that scales linearly with dataset size and with the load generated by submitted queries. The breakdown (assuming cloud):
    • A storage component linear in dataset size. Economies of scale must bring this cost down significantly. Eventually it must be cheaper than on-site storage.
    • A computing component linear in load, with infinite intra-query automatic elasticity. Guaranteed elasticity may bear a fixed premium proportional to the guaranteed capacity. Minor failures of a cloud component must not restart long-running queries.
    • A bandwidth component. FedExing hard drives is by far the cheapest way to upload data, and query results are really small. How much information can a human comprehend at once, after all?
  • Multi-form =>
    • normalized relational
    • star-schema
    • cubes
    • serialized objects / nested data.
    • text
    • media
    • spatial
    • bio / scientific
    • topographical
    • and other data forms must be equally well supported and cross-queried.
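The random-scaling idea mentioned under Security above can be sketched as follows; everything here (the factor range, the field, the query) is invented for illustration.

```python
# Toy illustration of random scaling: numeric values are stored multiplied
# by a secret factor, predicates are rewritten with the same factor on the
# client, and results are scaled back. The cloud never sees true values.
import random

factor = random.Random().uniform(2.0, 10.0)  # secret, kept client-side

salaries = [90, 120, 150, 200]               # plaintext, client-side
stored = [s * factor for s in salaries]      # what the cloud stores

# The query "salary > 100" is rewritten before being sent to the cloud...
threshold = 100 * factor
result_scaled = [v for v in stored if v > threshold]  # evaluated in-place

# ...and the result is scaled back on the client.
result = [round(v / factor) for v in result_scaled]
```

Note that scaling preserves order, so range predicates still evaluate correctly in-place, while the stored magnitudes reveal little without the factor; this is exactly the kind of cost-benefit compromise the bullet points describe.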

Terminology: Analysis vs. analytics and more…

I see a lot of confusion in the usage of the newer terms in analytics, and I occasionally confuse them myself. I find it funny that an industry as serious as analytics tolerates constant renewal of its basic terminology. Yet, I confess, I'm quite guilty of it myself: I do enjoy the freshness and novelty of the newer terms, even being fully aware that it is fake to a large extent.

In this post I’ll take a step toward clearing the confusion around a few of the most basic terms: analysis vs. analytics vs. BI, and all their common derivatives.

The Spoiler (the quick answer):

Analysis is the examination process itself, whereas analytics is the supporting technology and associated tools. BI is quite synonymous with analytics in an IT context. Advanced Analytics, Business Analytics, Data Analytics, Analytics Software and Analytics Technology are almost always marketing pleonasms (redundant expressions) and can safely be substituted by just ‘analytics’. ‘Data analysis’ is yet another pleonasm. Compound expressions of these words, such as ‘BI Analytic Technology’, are yet again pleonasms, albeit of higher degree. Some nuances exist though, and they are elaborated in this post.

The deep dive for the brave souls:

Let’s attempt to properly define the terms and then carefully examine the alleged differences.

Before we dive in, a word of caution: definition by synonyms is wrong. It causes a stack overflow in the minds of programmers. For example: “analysis” => “critical examination” => “examination” => “critical inspection” => “inspection” => “critical examination” => “f…” => “why don’t I just make myself a cup of coffee?”.

I wanted to point you to material on what makes a good definition and the common mistakes, but apparently a quick look hasn’t turned up anything good on proper definitions; for the fallacies, though, there is a nice Wikipedia article. If you find a good article on what makes a good definition, drop me a note or a comment; fittingly, it would have to include a definition of ‘definition’.

Let’s start….

What is analysis?

Analysis is a pretty old, well-understood term and essentially means “breaking down” or “decomposition”. More accurately: “the process of decomposing a complex entity into simpler components for easier comprehension of the aforementioned entity”. As a child I did a lot of it to the toys and electronic appliances around me. I challenge you to find a better and more concise definition than mine above (it is a matter of taste, but anyway). Here are some links to save you time:

What is analytics?

Analytics is a newer term related to analysis, and looking it up will usually only add to the confusion, since definitions vary, are fuzzy and seem to be context-dependent. Focusing on the IT context, I went through many usage examples and definitions. My verdict is that analytics simply means: the technology and the associated tools for data analysis.

If so, then ‘data analytics technology’ is a doubly redundant (or, more accurately, pleonastic) term, because analytics is a technology by itself and it is obvious that in an IT context only data can be analyzed. Hence the above phrase can be abbreviated to ‘analytics’ without any impairment of the meaning. The same goes for ‘data analytics tools’. However, when an IT context is not implied, something like ‘data analytics software’ could be appropriate. In this case ‘data’ links it to IT and ‘software’ further narrows its meaning.

Incorrect usage (according to my interpretation):

A software company most probably doesn’t develop “next-gen data analysis” but “next-gen data analytics”. And by the same token, “cloud computing analysis” means examining the cloud computing concept, not using cloud computing as a tool for doing analysis. In the latter case “cloud analytics” must be used.

An analyst performs in-database analysis, or applies in-database analytics to calculate something. However, an analyst doesn’t perform in-database analytics.

If you look at the terms used by the QlikView folks, you will find pretty much all of the above terms used interchangeably, including the statement that they “provide fast, powerful and visual in-memory business analysis”. One might think that they provide business advice for companies in the memory business. Terminology aside, no bashing of QlikView; it is excellent analytics software and one of the very few that just work out of the box.

What is analytical?

In regard to data, it means that the data was compiled using analysis. In regard to a tool, it means that the tool is intended for analysis.

Data Analysis and Data Analytics

As already mentioned, in an IT context both are pleonasms, and non-data analysis or non-data analytics are both oxymorons. So why stress ‘data’ anyway? Mostly there is no reason; in other cases it is there to hint at the IT context. For example, for bankers it is ‘financial analytics’, but for the IT folks in the bank it is ‘data analytics’.

What does ‘advanced analytics’ hint at, then?

Well, I guess it is a way for a vendor to indicate that their analytics is less stagnant than their competitors’ :) Seriously though, where it really means anything, I guess it means that statistical methods are implemented: things like predictive modeling and clustering. It also has strong connotations with the Gartner press release naming it the second most promising technology for 2010.

What is wrong with just sticking with the older BI term?

It is a fashion thing, I guess… who said IT is boring? We could easily challenge the Parisian fashion industry on that. Seriously though, BI is considered a more comprehensive approach, encompassing many aspects, usually cross-departmental, and notorious for a high project failure rate. At least that is how the younger startups portray it. ‘Data analytics’, on the other hand, is portrayed as something simpler, more of a ‘quick wins’ departmental solution. Something akin to a ‘data mart’. And don’t ask me what the difference from data marts is. Have I mentioned it is a fashion thing?

Well, aside from fashion, there are more rational reasons too, of course. A startup pitching BI sounds boring at best, with Microsoft, IBM and Oracle dominating the space. It must define a new disruptive category and then dominate it. Those who have read Christensen may remember that no new term is necessary for disruption; somehow, though, it is easier to communicate using new terms. I would love to believe that it is not deceiving. In fact, masquerading advanced analytics as something completely distinct may work all the way from the investors to the customer’s CIO, who might find it suspicious that he is purchasing too many BI solutions, whereas purchasing a first “advanced analytics” solution early enough may seem quite smart and a sign that his organization is far from stagnation, especially right after reading that Gartner press release.

