Big Data: HOW TO SCALE FROM ZERO TO BILLIONS!

real-time data analytics

How a Big Data platform scaled from zero to billions of records within 6 months at ISCPIF (CNRS).


This talk covers our use of Elasticsearch, MongoDB, Redis, and RabbitMQ, and the scalable, highly available web services built on top of our Big Data architecture.

This presentation was given at Université Paris-Sud, LAL, Bâtiment 200, at an event organized by ARGOS. https://indico.mathrice.fr/event/2/overview

ISCPIF: http://iscpif.fr
Big Data at ISCPIF: http://bigdata.iscpif.fr
Climate at ISCPIF: http://climate.iscpif.fr
Playground for climate: http://climate.iscpif.fr/playground
Tweetoscope: http://tweetoscope.iscpif.fr

Big Data: What I am thankful for this Thanksgiving!

big data at iscpif

Big Data platform at ISCPIF: This list represents what I am thankful for this Thanksgiving

  • Elasticsearch: You know, for search!

This is the heart and soul of our discovery and curiosity. The first step of any data analysis is finding the right data, and behind every great data discovery there must be a great search engine 🙂 This is where Elasticsearch comes to the rescue, with its advanced real-time analytics and powerful full-text search capabilities built on Apache Lucene.
The ISCPIF Big Data platform uses Elasticsearch, relying on its built-in distribution and high-availability features, with over 900 million documents indexed to support data discovery and exploration for our researchers and scientific partners.
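
To make that concrete, here is a minimal sketch of the kind of full-text query our users run, written with the Python Elasticsearch client. The host, the index name ("tweets") and the field name ("text") are placeholders, not our actual production mapping.

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder host

response = es.search(
    index="tweets",  # hypothetical index name
    body={
        "query": {"match": {"text": "climate change"}},  # full-text match on the tweet body
        "size": 10,
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("text"))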

So I am thankful for Elasticsearch, which over the past 2 years has made my life much easier and made me a better engineer.
Continue reading “Big Data: What I am thankful for this Thanksgiving!”

Paris before and during terrorist attacks

 

Night of terror

The data from Twitter reveals some upsetting statistics about the Paris terrorist attacks of 13 November.

As can be seen, the query for the terrorist attacks returns no results before 22h30. Sadly, it shows that even without any knowledge of the event itself (from media or news), it is possible to infer from the data alone that something related to the queried terms must have happened.
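
For the curious, a hedged sketch of the kind of time-series check behind this observation: counting matching tweets in 30-minute buckets with an Elasticsearch date_histogram aggregation. The host, index, field names and keyword are illustrative placeholders, not the exact query used here.

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder host

resp = es.search(
    index="tweets",  # hypothetical index
    body={
        "size": 0,
        "query": {"match": {"text": "example keyword"}},  # placeholder for the event-related terms
        "aggs": {
            "per_half_hour": {
                "date_histogram": {"field": "created_at", "interval": "30m"}  # assumed timestamp field
            }
        },
    },
)

for bucket in resp["aggregations"]["per_half_hour"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])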

Paris after attacks

Continue reading “Paris before and during terrorist attacks”

Building a real-time news analytics application

It is always interesting to find out what exactly is happening around the world as it happens. Knowing about something as it happens is called “real time”, or rather “near real time”, because of network latency and the delays introduced by processing, streaming, and visualizing the data.

Now imagine you can monitor most international news agencies on Twitter in real time. What I have been developing for the last few months is an application that not only gives you important news and highlights from around the world as they happen, but also persists the data for a 24-hour window, so you can always go back and read important news and highlights that have already happened.
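
As one possible illustration of the 24-hour window, here is a minimal sketch using Redis keys that expire automatically; the key naming, the host, and the use of Redis for this exact purpose are assumptions, not the application’s actual implementation.

import json
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)  # placeholder host

def store_news_item(item_id, item):
    """Persist a news item for 24 hours; Redis drops it once the TTL expires."""
    r.setex("news:%s" % item_id, 24 * 60 * 60, json.dumps(item))

def recent_news():
    """Return every item still inside the 24-hour window."""
    items = []
    for key in r.keys("news:*"):
        raw = r.get(key)
        if raw is not None:  # a key may expire between keys() and get()
            items.append(json.loads(raw))
    return items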

Continue reading “Building a real-time news analytics application”

Building a Search Engine with Elasticsearch

I’ve been working with Elasticsearch for a while now. I gotta say, it’s a powerful open-source engine for building distributed, real-time search and analytics.

It uses shards and replicas across distributed machines to make your architecture reliable and scalable. It’s not just a simple full-text search engine, even though it does that part perfectly.
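
For example, shard and replica counts are just index settings. A small sketch with the Python client; the index name and the numbers are illustrative only.

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder host

es.indices.create(
    index="tweets",  # hypothetical index name
    body={
        "settings": {
            "number_of_shards": 5,    # data is split across 5 primary shards
            "number_of_replicas": 1,  # one extra copy of each shard on another node
        }
    },
)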

In my setup, Elasticsearch is connected to my MongoDB replica set and indexes tweets as they are streamed in from the Twitter API. I track tweets for a few projects (health, news, UN Global Pulse, etc.) and index some of them in ES on the fly.
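
A hedged sketch of that MongoDB-to-Elasticsearch path is below; the database, collection and field names are assumptions about the schema rather than the real one, and the plain loop stands in for whatever tailing or sync mechanism actually feeds ES.

from pymongo import MongoClient
from elasticsearch import Elasticsearch

mongo = MongoClient("mongodb://node1:27017,node2:27017/?replicaSet=rs0")  # placeholder replica set
es = Elasticsearch(["http://localhost:9200"])  # placeholder host

for tweet in mongo.twitter.tweets.find():  # hypothetical db/collection
    es.index(
        index="tweets",
        id=str(tweet["_id"]),  # reuse the Mongo _id so re-indexing is idempotent
        body={"text": tweet.get("text"), "created_at": tweet.get("created_at")},
    )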

Continue reading “Building a Search Engine with Elasticsearch”

Source of Tweets in France and the United States

So here is the question: how do people share on Twitter around the world? Which devices or services do they usually use to post their check-ins, photos, videos, and updates?

This is a simple analysis I ran, using Hadoop batch processing, on data I have been gathering for almost 2 months now (around 97 million tweets from the US and 5.5 million tweets from France at the time of this study) to get some answers to the question above.
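
To give an idea of the shape of such a job, here is a minimal Hadoop Streaming sketch: a mapper emitting each tweet’s "source" field (the client used to post) and a reducer summing the per-source counts. The script name, JSON layout and invocation are assumptions; the real "source" field also carries HTML markup that would need stripping, which is skipped here for brevity.

# tweet_sources.py
import json
import sys

def mapper():
    for line in sys.stdin:
        try:
            tweet = json.loads(line)
        except ValueError:
            continue  # skip malformed lines
        source = tweet.get("source", "unknown")  # e.g. the Twitter client used to post
        print("%s\t1" % source)

def reducer():
    current, count = None, 0
    for line in sys.stdin:  # reducer input arrives sorted by key
        key, value = line.rstrip("\n").split("\t")
        if key == current:
            count += int(value)
        else:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = key, int(value)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    # e.g. hadoop streaming: -mapper "python tweet_sources.py map" -reducer "python tweet_sources.py reduce"
    mapper() if sys.argv[1:] == ["map"] else reducer()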

I have 4 EC2 instances up and running 24×7 to track tweets (via the Twitter Public Streaming API) and store them in a MongoDB replica set. One of the nodes is an application server, built on a Node.js stack, that processes and visualises the stream in real time as it enters the system. I currently average about 100 tweets/s, with a minimum of 30-40/s and a maximum of 180-220/s. Several Twitter accounts track tweets by location and by different keywords at the same time, which is why I sometimes receive more than 1% of the entire stream!
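
The collection side can be pictured with a hedged sketch like the one below, using the tweepy library (an assumption about tooling, with its pre-4.0 streaming API) to write each incoming tweet into the MongoDB replica set. Credentials, hosts, track terms and the bounding box are placeholders.

import json
import tweepy
from pymongo import MongoClient

mongo = MongoClient("mongodb://node1:27017,node2:27017/?replicaSet=rs0")  # placeholder replica set
collection = mongo.twitter.tweets  # hypothetical db/collection

class MongoStreamListener(tweepy.StreamListener):
    def on_data(self, data):
        collection.insert_one(json.loads(data))  # store the raw tweet document
        return True

    def on_error(self, status_code):
        return status_code != 420  # stop only on rate-limit disconnects

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

stream = tweepy.Stream(auth, MongoStreamListener())
stream.filter(track=["news", "health"], locations=[-124.7, 24.4, -66.9, 49.4])  # keywords plus a rough US bounding box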

Continue reading “Source of Tweets in France and the United States”