Analyze Business Objects Without Constraints

Rows? Columns? Tables? ... “Object Analytics” means analyzing objects as a whole, without forcing data into constrained analytical schemas.

Enable Next Generation Intelligent Algorithms

Are you tired of using SQL for analytics? With its object-centric NoSQL database, Xplain opens up a whole new world for data scientists and algorithm developers.

Discover Big and Complex Data

What is the value of ever more data if you are forced into simplifications prior to analysis? Xplain allows you to analyze billions of complex objects holistically – instead of through keyhole views for each new business question.

Learn What Drives Your Targets

Just point at a problem and get it “xplained”. With all details available, you run predictive algorithms in an instant – and thus uncover meaningful factors that drive risks and outcomes.


Your business is focused on a specific object

– it might be the “customer”, the “patient”, or a technical device, like the “machine” or the “car”. No matter what the central focus of your business is, understanding this object holistically is vital to your business’ success. Holistically means with “all information available” ...

... that, however, is exactly where today's databases and analytical technologies fail. Databases split complex objects into atomic parts and store them in different tables. Once scattered across many tables, the object is extremely difficult to analyze “as a whole”.

With its “object-centric” representation, Xplain enables a radically different view of your data. Database vendors advertise the benefits of their specific row- or column-stores. Being in charge of a business, however, you don't want to analyze "rows" – you need to analyze a certain business object. An object store is therefore the most appropriate representation of your data.

It offers unprecedented opportunities

  • for the developer – no cumbersome joins and no poorly performing SQL,
  • for the data scientist – build predictive models and novel algorithms without expensive preprocessing and data flattening,
  • and for the business user – you don’t need to ask your expert to shoot yet another keyhole image of your business object – you will be analyzing the object as an entity.

The object store offers more than novel analytical means. Because objects are readily available in all detail, you can run an analysis on billions of them in an instant.

“Object Analytics” is a game changer in Big Data Analytics.


Why not use databases for object analytics?

Many analytic scenarios focus on a certain object. In today's data-rich world that means not only that millions of these objects are stored, but that each is usually composed of complex structures and sub-structures. The “patient”, for example, does not only have a set of diagnoses and prescribed drugs over time, but also hospital visits. Each visit in turn is a series of events such as diagnostic procedures, lab results, and various therapies. Soon genetic and omic data will add yet another level of complexity to the picture.
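To make this concrete, here is a minimal sketch of such a nested “patient” object in plain Python – all field names, codes and values are hypothetical, chosen purely for illustration:

```python
# A nested "patient" object with hypothetical fields and codes.
# In a relational database this single object would be normalized
# into half a dozen separate tables.
patient = {
    "patient_id": "P-001",
    "birth_year": 1957,
    "diagnoses": [
        {"code": "E11", "date": "2019-03-02"},  # type 2 diabetes
        {"code": "I10", "date": "2020-11-17"},  # hypertension
    ],
    "prescriptions": [
        {"drug": "metformin", "start": "2019-03-10"},
    ],
    "hospital_visits": [
        {
            "admitted": "2021-05-01",
            "events": [
                {"type": "lab", "test": "HbA1c", "value": 7.9},
                {"type": "procedure", "code": "4-523"},
            ],
        },
    ],
}
```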

This complexity is in painful contrast to today’s analytical possibilities. You may choose specific analytical technologies that provide high performance. But those technologies force you to cast your data into constrained schemas and thereby simplify your object of analysis. Or you stick to the complex relational model and do the analysis using complex SQL queries. Many of you have already experienced the pain this creates. SQL is not meant for complex analytical algorithms, and even if you manage to develop algorithms in SQL, you often experience disappointing performance.

It’s not that a relational database is a bad thing. Indeed, it cannot be beaten for what it's built for – consistently managing complex data. This requires splitting an object into atomic parts and storing different elements in assorted tables. Once distributed across many tables, however, an object is hard to analyze “as a whole” – and that is exactly where today's databases and analytical technologies fall short.

What is different with the Xplain technology?

Xplain organizes data in a different, “object-centric” way and provides access to objects as a whole. You can define operations on those objects and iterate over them, just like you previously iterated over rows in tables, but without the time-consuming JOINs. To really reap the benefits, however, you can use the map-reduce interface to define an operation on an object and execute it in massively parallel fashion across millions of stored instances.
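To illustrate the idea only – this is not Xplain's actual API, and all names below are invented – here is a minimal map-reduce sketch in plain Python: the “map” step is an operation defined on a single object, which the engine conceptually applies to millions of stored objects in parallel before folding the results with the “reduce” step:

```python
# Hypothetical sketch of map-reduce over whole objects;
# Xplain's real interface may look entirely different.
from functools import reduce

# Stand-in for the object store: a few nested "patient" objects.
patients = [
    {"patient_id": "P-001", "hospital_visits": [{"admitted": "2021-05-01"}]},
    {"patient_id": "P-002", "hospital_visits": []},
]

def count_visits(patient):
    # "Map": one operation, defined on one object as a whole.
    return len(patient.get("hospital_visits", []))

def add(a, b):
    # "Reduce": combine the partial results.
    return a + b

# Sequential here; an object store would execute the map step
# massively in parallel across all stored instances.
total_visits = reduce(add, map(count_visits, patients), 0)
print(total_visits)  # 1
```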

In most cases we are not replacing the database as the primary store – it does a perfect job managing data. Within your database you just need to choose an object to focus your analysis on. The data is then quickly re-organized into an object-centric format with no loss of detail, and within minutes you are offered a holistic view of your data with the chosen object in focus.

Algorithms previously painful to implement in SQL are now easy to apply. Novel algorithms – previously simply unimaginable – are becoming feasible (see, for example, the predictive models described below).

What new opportunities does this create?

Existing analytical technologies require you to “simplify” data into constrained formats. Standard BI technologies, for example, require data to be represented in terms of a “cube” or a star schema (basically a flat, annotated table). Predictive models are another example: with existing approaches you first need to process complex data and force it into a “flat” analytics table before the model itself can be built. This is not just tedious and expensive, but in particular requires making biasing assumptions.
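The following Python sketch (hypothetical fields, illustration only) shows what such flattening looks like in practice – note that every aggregation choice is an assumption baked in before any model is built:

```python
# Flattening a nested patient into one fixed-width analytics row.
# Each choice below (count vs. most recent vs. time window) is a
# biasing assumption made before modeling even starts.
def flatten(patient):
    return {
        "patient_id": patient["patient_id"],
        "n_diagnoses": len(patient["diagnoses"]),      # temporal detail lost
        "n_visits": len(patient["hospital_visits"]),   # event detail lost
        "on_metformin": any(p["drug"] == "metformin"
                            for p in patient["prescriptions"]),
    }

row = flatten({
    "patient_id": "P-001",
    "diagnoses": [{"code": "E11"}, {"code": "I10"}],
    "hospital_visits": [{"admitted": "2021-05-01"}],
    "prescriptions": [{"drug": "metformin"}],
})
print(row)  # {'patient_id': 'P-001', 'n_diagnoses': 2, 'n_visits': 1, 'on_metformin': True}
```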

No expensive preprocessing

We eliminate this step and operate directly on objects with all details included. “Object Analytics” may be seen as extending existing BI concepts from flat, artificial data to complex, real-world data. A predictive model implementation based on our technology immediately operates on objects with all their details, instead of on rows in a flat table that merely approximate the original object.

No experts, no assumptions

With the full data model available, you don't need to guess which parts of the data are important for your analysis or contribute to your predictive model. It's “xplained” to you based on all available data. This in particular means unbiased discovery of the network of potential “cause and effect” relationships driving a target – at the push of a button, as the algorithm completes in seconds on millions of objects with billions of detail records.

Faster turnaround times

Without the preprocessing and the time-consuming creation of queries, you can bring analytics from the ivory tower into your daily business. Data that used to be gathered and manually processed to explain deviations in crisis mode can now be used day-to-day, thanks to the lowered development effort. There is no need to repeatedly process data into different constrained views for each emerging question. Business users may interactively follow their train of thought from question to follow-up question and – supported by predictive models – discover the potential “cause and effect” relationships driving their targets.

Vision & Mission

Our Mission: “We help early-adopter customers who are challenged by complex analytics. We build applications for them, aiming to propel them to the innovative forefront of their domain.”

Algorithms previously painful to implement become easy with our object representation. “Object Analytics”, however, is a major change because algorithms formerly just unthinkable are now becoming feasible. For centuries, statisticians developed theories based on “rows in tables”. A “row”, however, is a simplified and poor representation of a complex object. Working with objects instead of rows opens up a whole new world for data scientists and algorithm developers.

Our Vision: “We imagine algorithms that process any complex data as it is – and live in the real world instead of an artificially prepared analytics environment.”

Existing analytical technologies require data to be “simplified” and – as a result – cannot directly deal with real-world data. A data expert first needs to pre-structure the data (see the Technology section). This not only requires repeated manual effort. The worse part is that for a simplified schema you need to make assumptions about what might be important – and can therefore hardly discover new knowledge.

Imagine algorithms that can process data of any complexity – they just dig through the data, with no need for human experts to manually pre-structure and simplify it. Not only do analytical projects become much more agile and end users much more empowered ...

... soon – with no need for human experts to pre-structure data – intelligent agents based on predictive algorithms will autonomously dig through complex and constantly changing data environments. With that, the vision of autonomous intelligent modules comes much closer to reality.

Object Analytics will propel the field of Artificial Intelligence into novel orbits.



You trust in your physician because he or she knows you and your health history. Doctors make decisions on new events in the context of everything that has previously happened. Single pieces of information are rarely meaningful, and more than in any other domain, healthcare professionals need the ability to analyze their "object" – the patient – as a whole.

Consider rules in a claims management system, or more demanding evidence-based algorithms for care management: whether or not a drug or treatment is refundable or recommendable depends on previous diagnoses, prescriptions, age, general health status and predisposition – in other words, on the patient as a whole. With Xplain the object “patient” is at your fingertips with all of its details, and in a second you can compare one patient object to millions of others. Algorithms with the patient in focus become simple to implement, where in the past you had to struggle with complex and inefficient SQL.
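As a hedged sketch of what such a rule can look like when it operates on the whole patient object – all codes, thresholds and field names below are invented for illustration, not actual medical or claims logic:

```python
from datetime import date

# Hypothetical refundability rule expressed directly on the patient
# object instead of via multi-table SQL joins.
def drug_is_refundable(patient, drug):
    has_diabetes = any(d["code"].startswith("E11")  # invented criterion
                       for d in patient["diagnoses"])
    age = date.today().year - patient["birth_year"]
    already_prescribed = any(p["drug"] == drug
                             for p in patient["prescriptions"])
    # The rule sees the patient as a whole: history, age, prior therapy.
    return has_diabetes and age >= 18 and not already_prescribed

print(drug_is_refundable(
    {"birth_year": 1957,
     "diagnoses": [{"code": "E11"}],
     "prescriptions": []},
    "empagliflozin"))  # True under this toy rule
```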

Life Science

Diagnoses, procedures, prescriptions, drug information, and increasingly also genetic and omic information and public research data – all this information from different sources forms a complex data schema. Only the ability to correlate these different streams will reveal new knowledge – such as the factors driving risks and the success rates of treatments – as a basis for better care and individualized medicine.

Xplain allows you to combine different data sources into one holistic picture of the target and to implement new algorithms which in the past would have been simply unimaginable. Predictive models (see the Technology section) are a particularly relevant example for Life Science. Without prior assumptions, the algorithm builds relevant predictive variables directly from a complex data schema, providing you an unbiased list of dependencies. Statisticians know well that missing information leads to flawed conclusions about “causal” effects – and this also holds if you can analyze just parts of the whole at a time. Xplain’s holistic Object Analytics therefore constitutes a novel opportunity, in particular to uncover functional dependencies and meaningful biomarkers.


A Formula One racing car has it: sensors everywhere, and a team of experts analyzing data to optimize the car and give the driver valuable feedback. Your car most likely also has a variety of sensors already. And additional data is collected permanently: diagnostic error codes, information on component failures, maintenance records – and the manufacturer certainly also knows which components are built into the car, which supplier shipped them and where the pieces were assembled.

“Object Analytics” in that case means combining all this data in terms of the object “car”. Imagine the possibilities! You may easily cross-correlate any of these data streams or build a predictive model using all that wealth of information to predict failures and – more importantly – understand what drives failures. You could make your service stops when they are actually necessary, not based on time or mileage. The technician would maintain and exchange exactly the parts that are about to wear out, and your next car will be even more reliable thanks to the combined fleet data that the manufacturer collects.

Cars are only one example. With Industry 4.0 and the Internet of Things, more and more machines collect data, and diverse data streams are connected to the object of analysis. Exactly such complex data networks are where our Object Analytics model becomes indispensable to making the above scenarios real.


Experience “Object Analytics”

The quickest way to get there is using it as a service:

  1. Upload your data to a secure cloud environment (e.g. AWS or Digital Ocean).
  2. Your data is turned into the Object Analytics format.
  3. Access your data through a web-based frontend which implements the novel Object Analytics paradigm. (There is no need to install anything – all you need is an up-to-date browser.)

Our data scientists will initially assist you in discovering your data so that you get up to speed quickly and experience the full capabilities of the Object Analytics approach. This in particular includes building predictive models and exploring the dependencies which drive a predictive target and, with it, your business goals.

Have you ever tried a more demanding analysis with SQL and felt like trying to hammer a nail into a wall with a shovel? Then here is the hammer you have been waiting for: as part of a Proof of Concept we can also show you how to easily develop some example algorithms where you formerly struggled with SQL.

Once you have had a first taste, we are sure you won’t want to miss it any more. You may continue to use the technology as a cloud-based service and upload new data on a regular basis. Or you may eventually decide on an on-premise installation.

Xplain is the only company that can reasonably offer analytics as a service! Why? Because we can analyze data “as they are”! We do not need to repeatedly engage experts to manipulate the data for each different question the business user is going to ask.

“Each new idea passes through three stages.
First, it is ridiculed.
Second, it is violently opposed.
Finally, it is considered self-evident.”

– Schopenhauer (1788–1860)

The Team

Founded in 2015, Xplain Data GmbH is 100% privately owned and self-funded by the team. As a tiny team we have set out to develop some groundbreaking innovations in the context of big data and predictive analytics. Our mission is to change the game of how data is turned into intelligence.

We are not yet for the broad masses, but are looking for “visionary” or early-adopter customers who – in close cooperation with us – want to bring leading-edge intelligence into their portfolio and become co-innovation partners in further developing our technology.

We are particularly keen on testing new innovation models such as “Excubation Innovation”. If you want to know more about combining established companies with small innovative start-ups and securing early access to strategic assets, please feel free to contact us.

Dr. Michael Haft

has a PhD in Theoretical Physics and Neuroinformatics and more than 20 years’ experience in developing analytics technologies at major companies like Siemens, Accenture and SAP. Before founding Xplain Data he worked as Chief Architect at SAP with a focus on Big Data Analytics. Earlier in his career, he co-founded a startup where he was responsible for the entire product lifecycle of analytics innovations. Michael has gathered broad and unique knowledge, spanning from database and BI technologies to statistics, mathematics and machine learning. From numerous projects he knows how to apply those technologies in a business context.

Peide Wang

has a Diploma in Mathematical Sciences and joined us from SAP, where he last worked on predictive maintenance as a development architect. In 13 years of business application development he gained extensive knowledge of different technologies and platforms, ranging from R/3 modules (ABAP) and database algorithms (C++) to modern web applications (Java/Javascript). At Xplain Data he designs and implements delightful user interfaces for our customers and brings his vast algorithm expertise to the table.

Dr. Hanjo Täubig
(Senior Data Scientist)

holds a diploma in computer science and earned a PhD with a thesis on structure search in protein databases. He worked as a substitute professor of theoretical computer science at the Chair for Efficient Algorithms. For several years, Hanjo taught the fundamentals of algorithms and data structures at TU Munich. He also gave master-level lectures on computational biology and advanced network and graph algorithms. Hanjo is an expert in algorithms and data structures, and in particular a specialist in bioinformatics and graph/network-related problems.

Dr. Christian Koncilia

holds a PhD in applied computer science, earned for developing algorithms to deal with structural changes in data warehouses. He has worked for more than 25 years in the Life Science and Health Care industries, on projects for various hospitals, insurance companies and pharmaceutical companies. Christian has strong hands-on experience with different Business Intelligence tools, database management systems and many programming languages and frameworks. During the last decade he has focused on the development of web applications.