Welcome to Mobius

Mobius was created by professional coders and passionate people.

We built it so you can enjoy great features and design quality. Mobius was built to achieve a pixel-perfect layout.

Mobius includes exclusive features such as the Themeone Slider, the Themeone Shortcode Generator and the Mobius Grid Generator.

Our Skills


Big Data services

Solutions for High-Volume, High-Variety, High-Velocity Data


Hybrid Solutions

Pragmatic hybrid solutions that use existing BI investments in parallel with emerging Hadoop/NoSQL/in-memory technologies

Business Assessment and Data Science

Discover where you stand in terms of data maturity and what value your organization can derive from existing and emerging data

Design and Implement

Design, architecture and implementation on the Microsoft Big Data and open-source Hadoop stacks


Presentation of data through collaborative, secure and mobile-ready portals

Technology Expertise


Hive, HDFS, Storm, Mahout, R

Windows Azure

Azure ML, Azure IoT, HD Insight, Power BI

SQL Server

Microsoft SQL Server, Parallel Data Warehouse, Power BI, SSxS

Case Studies

Connected Car

Connecting cars in the Middle East with law enforcement and insurers



Crash-test sensor analysis to make cars safer

Example Projects in Big Data

IoT for Manufacturers

Manufacturing investment in IoT solutions rose 230% over the last two years.

Big Data in Retail

Integrated Inventory, Shrinkage Analysis, Warranty, Basket Analysis, Product Recommendations, Customer Intelligence 

Big Data in Insurance

Improve Claims Management, Risk Assessment, Fraud Detection, Customer Service and Operations Optimization

IoT in Transportation

Real Time Maps and Geo-fencing, Integrated Driver Dashboard, Predictive driver assessment, Customer loyalty Management

Latest Posts

Big Data Planning checklist document

August 31, 2017
There is a lot of hype about "Big Data" solutions among most of our customers. When I first looked a few years ago, most things were very early stage, with little genuine intent from customers to implement. In the recent past, however, I have seen an increase in the number of jobs in the Big Data space (in particular Hadoop-based solutions), indicating an increase in demand. The list below is a quick list of thinking points if you are considering implementing a Big Data solution.

Goals

If you are like most organizations, you already have a BI solution in place: basic reporting, or warehouses feeding reports to the business. Most BI techies want to follow trends and chase the "new paradigm" of big data, but you should think about whether you really:

- need to process terabytes of data to meet business goals
- have data that changes very rapidly and needs to be tracked and alerted on
- have a variety of sources that can't or shouldn't be normalized schematically

We have a listing of typical Big Data solutions by industry in our solutions catalog that you may want to browse for ideas.

Main choices

As with all buzzwords, there are many interpretations of what "Big Data" is. The main choices available under this umbrella are:

- MPP: basically relational servers on steroids
- Hadoop stack: HDFS (distributed file system), YARN (resource manager), Hive (SQL-like query interface to Hadoop data)
- Hybrids: products that combine the best of both options above
- NoSQL: store data in its native format and add schemas at query time; here you're looking at things like MongoDB, Azure Cosmos DB, etc.

Cloud vs On Prem

With all of these, you will find options in the cloud and on premises. The question of whether to go cloud or on prem is one facing every part of the application stack, and Big Data platforms are no different.
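To make the NoSQL "add schemas at query time" point above concrete, here is a minimal schema-on-read sketch in Python. All record contents and field names are invented for illustration; a real document store (MongoDB, Cosmos DB, etc.) applies the same idea at much larger scale.

```python
import json

# Raw events stored in their native JSON form, with varying fields
# (schema-on-read: no schema is enforced when the data is written).
raw_records = [
    '{"device": "sensor-1", "temp": 21.5}',
    '{"device": "sensor-2", "temp": 19.0, "humidity": 40}',
    '{"user": "alice", "action": "login"}',
]

def query_temps(records):
    """Apply a schema at query time: keep only records that expose
    the fields this particular query cares about."""
    out = []
    for line in records:
        doc = json.loads(line)
        if "device" in doc and "temp" in doc:
            out.append((doc["device"], doc["temp"]))
    return out

print(query_temps(raw_records))
# Records that don't match the query's schema are simply skipped,
# not rejected at load time.
```

The trade-off, compared with a relational schema, is that every query has to decide for itself which fields it requires and how to handle records that lack them.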
Cloud platforms typically add value over on-prem options in terms of:

- pay for use only (both storage and compute)
- ease of scale-out
- an ever-increasing feature set
- ease of integration with other tooling such as visualization, real-time alerting, and web and mobile front ends

Distributions

One of the challenges with open-source tools (or those with foundations in the open-source world) is which distribution to use. For Hadoop (perhaps the most popular Big Data technology), for example, distributions are available from Hortonworks, Cloudera, Apache, etc. These add value to the baseline Apache build in terms of dev tools, administration, security, ease of deployment, etc. This means they usually trail the latest-and-greatest from the Apache projects. With things changing rapidly in this space, you should carefully consider what features you need and whether your distribution has, or intends to provide, them.

Cost of ownership

Big Data is perhaps one of those things with the biggest collection of "free" tools

Hadoop Foundation III: Laying the architecture & tools that complement Hadoop

January 29, 2016
You can also access Part I & Part II of this series, "Laying the foundation of a data-driven enterprise with Hadoop". The Hadoop platform has performed well for batch, interactive and real-time data processing when the core is Apache Hadoop. Recently, Hortonworks launched a new technology called Apache NiFi. It was created at the National Security Agency (NSA), U.S., under the name "Niagarafiles", and was open-sourced at the end of 2014; today it is a top-level Apache project. NiFi is about managing data close to its inception and through its flow as it navigates to the system it needs to reach, whether it gets stored on a Hadoop platform or real-time analysis is applied to it as it flows. So you get a very good combination of deep historical insights from a full-fidelity history, together with the ability to draw perishable insights that are here now, and that feedback loop is important as people lay the foundation of a data-driven enterprise.

What about data security?

Finally, securing your data no matter what access engine you are using is also important. Whether you access data through SQL using Hive, do data discovery, machine learning (ML) and modeling using Spark, or do real-time stream processing using Kafka, Storm and Spark, it doesn't really matter how you interact with the data: you want to be able to set up a centralized security policy, administer and audit those policies, decide who gets access, and choose how you encrypt the data in motion as well as at rest. There are newer technologies in the platform for this.
For operations, Apache Ambari provides the requisite capabilities. For governance, there are technologies like Apache Atlas and Falcon.

Parting thoughts

So the benefits of bringing in a platform like the YARN-based architecture are that you can bring diverse workloads under your management and add new data-processing engines on top, making expansion easy and the platform future-proof. You get consistent services for data governance, operations and security, so you have consistent, centralized operation, particularly as you bring in new workloads and data. And you get resource efficiency with a mixed set of workloads, whether end users use tools to issue SQL or use Spark to discover new patterns or value. Vendors like SAS and Microsoft can run natively in Hadoop. It ensures that they are not monopolizing

Hadoop Foundation II: What type of data Hadoop can help me with?

January 29, 2016
Part I of the series "Laying the foundation of a data-driven enterprise with Hadoop" can be accessed here. Hadoop can be used with any type of data, whether batch, interactive or real-time. It can be applied to any data, whether from a traditional system or from the Internet of Things (IoT), and deployed anywhere: on-premises, cloud, appliances, Linux or Windows. Importantly, it enables a consistent experience for bringing that data together in a way that is interoperable with the tools you already have. At the center of the platform is the technology called YARN. We view it as a data operating system: just as Windows multitasks applications that run on top of it, like Microsoft Office or Adobe Photoshop, YARN is that sort of operating platform for Hadoop. It enables a wide range of data-processing engines, open source as well as from partners such as Microsoft's HDInsight, Talend and others, to run natively on the platform and benefit from its scalability.

Hadoop: A modern platform

A modern data platform needs operations, security and governance, so these capabilities are built into the platform. This makes it easy to manage, monitor and provision, on-premises or in the cloud, and to manage high availability. It should also help manage the lifecycle of the platform and of the workloads running on it, and raise alerts when workloads need care and feeding. Data governance is clearly important: being able to manage data through its life cycle and understand the lineage of the data. Hadoop is no different from any other data system in your enterprise; most of them participate in data governance.

Top use case: The Single View of X

In the top middle, probably the top two use cases we see are the single-view use cases: the single view of the customer, the single view of the product, the single view of the supply chain, and the single view of the patient.
Being able to collect disparate data, arguably from siloed data sets, and bring it together so you can join it in a way you haven't been able to before is a very big use case for driving additional revenue or better care. The world of fast data, data in motion, together with rich historical data, deep historical machine learning and data modeling, really underpins predictive analytics. In many cases you will see businesses transforming themselves with predictive-analytics applications, so that's the landscape of the journey: folks will pick one use case and move on to others along the way.

Build on top of simple use cases OR use them as reference points

Single view use case

To give you an example, let's have a look at Mercy Corps. Since most of us are patients at some point, at our birth or during our life cycle, and they are really about delivering transformational
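The "single view" pattern discussed above can be sketched in a few lines: join records from two silos on a shared key to build one consolidated profile per customer. The silo names, customer ids and fields here are all invented for illustration; in a Hadoop deployment the same join would run over Hive or Spark at scale.

```python
# Two hypothetical silos: a CRM profile store and a web-event log.
crm = {
    "c001": {"name": "Ada", "segment": "enterprise"},
    "c002": {"name": "Grace", "segment": "retail"},
}
web_events = [
    {"customer_id": "c001", "page": "/pricing"},
    {"customer_id": "c002", "page": "/support"},
    {"customer_id": "c001", "page": "/docs"},
]

def single_view(profiles, events):
    """Join the silos on customer id into one consolidated record."""
    view = {cid: dict(profile, pages=[]) for cid, profile in profiles.items()}
    for ev in events:
        cid = ev["customer_id"]
        if cid in view:  # drop events with no matching profile
            view[cid]["pages"].append(ev["page"])
    return view

view = single_view(crm, web_events)
print(view["c001"])
```

The value, as the post argues, comes from the join itself: neither silo alone can answer "which enterprise customers browsed the pricing page?".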

Hadoop Foundation: When to use Hadoop for a Data-driven enterprise?

January 29, 2016
Confluent raised $24 million for the data "streams" powering LinkedIn, Netflix and Uber. This is a company helping corporate giants like Netflix, Uber and LinkedIn get new insights from their data using Apache Kafka. And did you know why big data is becoming such a big deal? Have a look at this short video to kickstart this article.

Why is it important to get data-driven?

The context here is that digital transformation is impacting every industry. People talk about the growth of data, and it is no longer up for debate. It can be:

- Sensors & machines, typically referred to as the Internet of Things
- Geo-location
- Server logs
- Clickstream and social media
- Files and emails

In technical terminology, you have non-relational databases or non-traditional data management systems, and then you have data coming from traditional sources (like ERP, CRM, PoS terminals) that you store in data warehouses. Both of these are growing at a rapid pace. The question becomes: how do you effectively blend this information in a way that is transformational to the business? And don't take "transformational" as a cliche: savvy folks and companies have been using Hadoop (without naming it, as we do) for years, but it remains transformational for a company that has not yet implemented it. This blended data (from traditional and non-traditional sources) will help you be proactive with your customers and supply chain, as opposed to reacting days, weeks or months after the fact.

Opportunity

The opportunity is to unlock the business value from a full fidelity of data and analytics across that data.

Challenge

The reality is that much of the new data exists in flight: it is in motion and part of the systems and devices that make up the Internet of Everything landscape. When you see fig 1 below, you realize that the ability to consume data is a challenge (line in the middle).
Another challenge is: how do you actively manage the data from as close to the point of inception as possible, through its lifecycle, and through the real-time or historical analytics you may want to apply to it? That's the backdrop of many folks' journey towards becoming data-driven.

How do companies start their Hadoop journey?

The guys at Hortonworks see a clear pattern, particularly over the last few years, when they help bring Hadoop into enterprise IT infrastructure. Companies have adopted the Hadoop ecosystem both for cost savings and for unlocking transformational business outcomes. These are the governing use cases, if you will, the common patterns. See the bottom-center part on cost savings (Fig 2 below).

What do I begin with?

Begin with ETL (extract-transform-load). Clearly in the center-bottom cost-savings segment, it is about right-sizing your traditional world and preparing to bring in some of the IoT sources, so that you can run the transformation logic on a platform like Hadoop instead of on your traditional platform.

ETL use case

There are significant cost savings
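The ETL pattern recommended above can be sketched in miniature. Everything here is illustrative (the field names, the cleansing rule, the Fahrenheit-to-Celsius transform); the point is the shape of the job: extract raw rows, apply transformation logic, emit cleaned records. On Hadoop the same logic would run as a distributed job instead of an in-memory loop.

```python
import csv
import io

# Extract: raw CSV as it might arrive from an IoT source,
# including a malformed row.
raw_csv = io.StringIO(
    "sensor,reading_f\n"
    "s1,68.0\n"
    "s2,bad\n"
    "s3,86.0\n"
)

def etl(source):
    """Transform: drop malformed rows, convert Fahrenheit to Celsius."""
    cleaned = []
    for row in csv.DictReader(source):
        try:
            f = float(row["reading_f"])
        except ValueError:
            continue  # cleansing step: skip rows that fail to parse
        cleaned.append({"sensor": row["sensor"],
                        "reading_c": round((f - 32) * 5 / 9, 1)})
    return cleaned

# Load: here we just print; a real job would write to HDFS or a warehouse.
print(etl(raw_csv))
```

Offloading exactly this kind of transformation logic from the traditional warehouse is the cost-savings case the post is describing.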

Big Data Skills: The super-set of a Data Scientist’s skills

October 27, 2015
It all began when participants in our Big Data training started yawning, apparently wanting a live, action-packed big data assignment. The training was boring them; they wanted to jump into the sexiest job of the 21st century. But when it came to a live assignment, faces started turning pale. It was not about writing the next script or piece of code; it was about the more holistic skill set that data scientists rely upon. While the participants are still struggling with their assignments, I decided to delve into the types of skills data scientists typically bring with them. A lively three-hour huddle with our Data Science Lead gave me a good understanding of what is required of data science and big data professionals. I'll share with you what I learned about the skills needed to become a data scientist.

1. Maths/Statistics

Set theory

Simply put, set theory is the part of mathematical logic that studies sets, which are, informally, collections of objects. It is a foundation of mathematics, and you will see big data applications that use various flavors of set theory: classical, rough, fuzzy, and extended set theory. It is applied in both SQL and NoSQL databases.

Use-cases: It is heavily used in decision analysis, DSS (decision support systems), AI (artificial intelligence), ML (machine learning) and knowledge discovery from databases [1]. Specific applications include analysis and reduction of data for event prediction, data mining, and demand analysis in the travel & supply chain industries. Data scientists also use it to analyze time-series data. Ken North has put forth a good explanation of using extended set theory (XST) in big data applications.

Numerical analysis

Data scientists use numerical analysis to obtain approximate solutions while tolerating a reasonable margin of error. You'll be using a lot of numerical linear algebra when performing data analysis.
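The set-theoretic operations described above are exactly what SQL joins and data-reduction steps compute; a tiny sketch makes the connection concrete. The data sets and customer ids below are invented for illustration.

```python
# Two hypothetical data sets, e.g. from an insurance analysis.
customers_with_claims = {"c1", "c2", "c3"}
customers_flagged_by_model = {"c2", "c3", "c4"}

# Intersection: customers with claims AND a model flag
# (analogous to an SQL inner join on customer id).
review_queue = customers_with_claims & customers_flagged_by_model

# Difference: flagged customers with no claims yet
# (analogous to an anti-join; a watch list).
watch_list = customers_flagged_by_model - customers_with_claims

print(sorted(review_queue), sorted(watch_list))
```

Rough, fuzzy and extended set theory generalize these crisp operations to graded or uncertain membership, but the underlying algebra is the same.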
A good handle on algorithms that use numerical approximation always differentiates a data scientist. Traditional numerical calculations are not sufficient to scale numerical algorithms, and the Big Data context may require advanced techniques.

Use-cases: Techniques such as matrix function evaluation and trace approximation are used to develop scalable numerical algorithms. Automobile manufacturers use advanced numerical techniques during rapid soft-prototyping. You'll be dealing with broad numerical analysis techniques such as:

- Differential calculus
- Group theory
- Set theory
- Regression
- Information theory
- Mathematical optimization

Statistical methods

Statistical analysis involves creating correlations using interpolation or extrapolation techniques for random and noisy data. Become familiar with statistical tests, distributions, maximum likelihood estimators, etc.

Use cases: Computational statisticians use statistical techniques to scientifically discover patterns and trends. Some of the popular techniques are:

- Association rules
- Classification
- Cluster analysis
- Regression & multiple regression analysis
- Time-series analysis
- Factor analysis
- Naive Bayes
- Support vector machines
- Decision trees and random forests
- LASSO (least absolute shrinkage and selection operator)

Linear/Non-linear algebra

Doing big data, you'll come across curves, straight lines and other oscillations formed by data analysis. Linear algebra is about vector spaces and