Sunday, 21 September 2014

DATA SCIENTIST

Step 1: Graduate from a top tier university in a quantitative discipline
Education makes a huge difference in your prospects of starting in this industry. Most companies that hire freshers pick people directly from the best colleges. So, by getting into a top-tier university, you give yourself a very strong chance of entering the data science world.
Ideally I would take up Computer Science as the subject of study. If I couldn't get a seat in the Computer Science batch, I would take up a subject with close ties to the computational field – e.g. computational neuroscience or Computational Fluid Dynamics.
Step 2: Take up a lot of MOOCs on the subject – but do them one at a time
This is probably the biggest change I would make in the journey if I were graduating now. If you spend even a year studying the subject through these open courses, you will be in far better shape than other people vying to enter the industry. It took me 5+ years of experience to appreciate the power R or Python bring to the table. You can do this today through the courses running on various platforms.
One word of caution here is to be selective about the courses you choose. I would focus on learning one stack – R or Python. I would recommend Python over R today, but that is a personal choice. You can find my detailed views on how the ecosystems compare here.
You can choose your own path, but this is probably what I would do (a small code sketch of the kind of workflow these courses build towards follows the list):
  • Python:
    • Introduction to Computer Science and Programming using Python – edX.org
    • Intro to Data Science – Udacity
    • Workshop videos from PyCon and SciPy – some of them are mentioned here
    • Selectively pick from the vast tutorials available on the net in the form of IPython notebooks
  • R:
    • The Analytics Edge – edX.org
    • Pick out a few courses from the Data Science specialization (Coursera) to complement The Analytics Edge
  • Other courses (applicable for both the stacks):
    • Machine Learning from Andrew Ng – Coursera
    • Statistics course on Udacity
    • Introduction to Hadoop and MapReduce on Udacity
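To make the end goal of all this coursework concrete, here is a minimal sketch of a first look at a dataset in Python. The file name and the "target" column are placeholders; the point is simply the shape of the workflow the courses above build towards.

```python
import pandas as pd

# Hypothetical dataset - any CSV with a label column will do
df = pd.read_csv("train.csv")

print(df.shape)                     # number of rows and columns
print(df.dtypes)                    # which columns are numeric, which are text
print(df.describe())                # summary statistics for numeric columns
print(df.isnull().sum())            # missing values per column - your first cleaning task
print(df["target"].value_counts())  # class balance, assuming a label column named "target"
```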
Step 3: Take a couple of internships / freelancing jobs
This is to get some real-world experience before you actually venture out, and to build an understanding of the kind of work that happens in practice. You will get a lot of exposure to real-world challenges in data collection and cleaning here.
Step 4: Participate in data science competitions
You should aim for at least one top 10% finish on Kaggle before you leave university. This should bring you to the attention of recruiters quickly and give you a strong launchpad. Beware, this sounds a lot easier than it actually is. It can take multiple competitions for even the smartest people to make it into the top 10% on Kaggle.
Here is an additional tip to amplify the results of your efforts – share your work on GitHub. You never know which employer might find you through your work!
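As a rough illustration of what a first Kaggle attempt looks like, here is a sketch of the usual train / predict / submit loop. The file names, column names and choice of model are placeholders rather than a recipe for any specific competition; real competitions will need feature engineering and proper validation on top of this.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Typical Kaggle layout: a labelled train file and an unlabelled test file
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Assume "id" identifies each row and "target" is the label; everything else
# is treated as a (numeric) feature for this sketch
features = [c for c in train.columns if c not in ("id", "target")]

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(train[features], train["target"])

# Write predictions in the submission format most competitions expect
submission = pd.DataFrame({"id": test["id"], "target": model.predict(test[features])})
submission.to_csv("submission.csv", index=False)
```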
Step 5: Take up the right job which provides awesome experience
I would take up a job at a start-up that is doing awesome work in analytics / machine learning. The amount of learning you can gain for a slight increase in risk can be amazing. There are start-ups working on deep learning, reinforcement learning and more – choose the one which fits you best (taking culture into account).
If you are not the start-up kind, join an analytics consultancy that works on tools and problems across the spectrum. Ask for projects in different domains, work on different algorithms, try out new approaches. If you can't find a role in a consultancy, take up a role in a captive unit, but seek a role change every 12 – 18 months. Again, this is a general guideline – adapt it depending on the learning you are getting in the role.
Finally a few bonus tips:
  • Try learning new tools once you are comfortable with the ones you already use. Different tools are good for different types of problem solving. For example, learning Vowpal Wabbit can complement your Python work when datasets get too large to model comfortably in memory.
  • You can also take a shot at creating a few web apps – this teaches you a lot about how data flows on the web, and I personally enjoy satisfying the hacker in me at times! A toy sketch follows this list.
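For the web-app tip above, here is a toy sketch using Flask, purely to show the request / response flow. The route, the payload fields and the stand-in "model" are all made up for the example.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()  # e.g. {"feature_1": 3.2, "feature_2": 1.1}
    # Stand-in for a real trained model - just a weighted sum of the inputs
    score = 0.5 * payload["feature_1"] + 0.1 * payload["feature_2"]
    return jsonify({"score": score})

if __name__ == "__main__":
    app.run(debug=True)  # POST JSON to http://localhost:5000/predict to try it
```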
A few modifications to these tips, in case you are already out of college or hold work experience:
  • In case you can still go back to college, consider getting a Masters or a Ph.D. Nothing improves your chances of landing the right job as much as a good programme from a top-notch university.
  • In case full-time education is not possible, take up a part-time programme from a good institute / university. But be prepared to put in extra effort outside these certifications / programmes.
  • If you are already in a job and your company has an advanced analytics setup, try to get an internal shift by demonstrating your learning.
  • I have kept the focus on R and Python because they are open source. If you have the resources to get access to SAS, you can also get a SAS certification as a predictive modeler. Remember, SAS still holds the majority of jobs in analytics!


Thursday, 18 September 2014

Big Data for Industries

There are 4 key layers in a big data system – i.e. the different stages the data itself has to pass through on its journey from a raw statistic or snippet of unstructured data (for example, a social media post) to actionable insight.
Data sources layer
This is where the data arrives at your organization. It includes everything from your sales records, customer database, feedback, social media channels, marketing lists, email archives and any data gleaned from monitoring or measuring aspects of your operations. One of the first steps in setting up a data strategy is assessing what you have here and measuring it against what you need to answer the critical questions you want help with. You might have everything you need already, or you might need to establish new sources.
Data storage layer
This is where your Big Data lives once it is gathered from your sources. As the volume of data generated and stored by companies has started to explode, sophisticated but accessible systems and tools have been developed – such as the Apache Hadoop Distributed File System (HDFS) or the Google File System – to help with this task. A computer with a big hard disk might be all that is needed for smaller data sets, but when you start to deal with storing (and analyzing) truly big data, a more sophisticated, distributed system is called for.
As well as a system for storing data that your computer system will understand (the file system), you will need a system for organizing and categorizing it in a way that people will understand – the database. Hadoop has its own, known as HBase, but others including Amazon's DynamoDB, MongoDB and Cassandra (used by Facebook), all based on the NoSQL architecture, are popular too. This is also where you might find the government taking an interest in your activities – depending on the sort of data you are storing, there may well be security and privacy regulations to follow.
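As a small illustration of the database side of this layer, here is a sketch of storing and querying a document in MongoDB (one of the NoSQL stores mentioned above) using the pymongo driver. The connection string, database name and fields are invented for the example.

```python
from pymongo import MongoClient

# Invented connection details - a real deployment would be a cluster, not localhost
client = MongoClient("mongodb://localhost:27017/")
db = client["company_data"]

# Store one piece of loosely structured feedback as a document
db.feedback.insert_one({
    "customer_id": 1042,
    "channel": "twitter",
    "text": "Delivery was two days late",
    "sentiment": "negative",
})

# Later, the analysis layer can pull back just the slice it needs
for doc in db.feedback.find({"sentiment": "negative"}):
    print(doc["customer_id"], doc["text"])
```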
Data processing / analysis layer
When you want to use the data you have stored to find out something useful, you will need to process and analyze it. A common method is to use a MapReduce tool: essentially, it selects the elements of the data you want to analyze and puts them into a format from which insights can be gleaned. If you are a large organization which has invested in its own data analytics team, they will form part of this layer too. They will employ tools such as Apache Pig or Hive to query the data, and might use automated pattern recognition tools to determine trends, as well as drawing conclusions from manual analysis.
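To make the MapReduce idea concrete, here is the classic word-count job written for Hadoop Streaming. The mapper and reducer are really two separate scripts; they are shown together here for brevity, and the input/output paths in the submission command are placeholders.

```python
# ---- mapper.py: read raw text on stdin, emit "word<TAB>1" per word ----
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t1" % word.lower())

# ---- reducer.py: lines arrive sorted by word, so sum counts per word ----
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current:
        if current is not None:
            print("%s\t%d" % (current, total))
        current, total = word, 0
    total += int(count)
if current is not None:
    print("%s\t%d" % (current, total))

# Submitted roughly like this (the streaming jar location varies by Hadoop install):
#   hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py \
#     -mapper "python mapper.py" -reducer "python reducer.py" \
#     -input /data/raw_posts -output /data/word_counts
```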
Data output layer
This is how the insights gleaned through the analysis are passed on to the people who can take action to benefit from them. Clear and concise communication (particularly if your decision-makers don't have a background in statistics) is essential, and this output can take the form of reports, charts, figures and key recommendations. Ultimately, your Big Data system's main task at this stage of the process is to show how a measurable improvement in at least one KPI can be achieved by taking action based on the analysis you have carried out.
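A trivial sketch of what that output might look like in code: a single before/after chart for one KPI, produced with matplotlib. The KPI, the numbers and the file name are all invented for the example.

```python
import matplotlib.pyplot as plt

# Invented KPI values - e.g. average days to resolve a customer complaint
kpi = {"Before change": 4.2, "After change": 3.1}

plt.bar(list(kpi.keys()), list(kpi.values()), color=["grey", "green"])
plt.ylabel("Average days to resolve a complaint")
plt.title("Impact of the recommended change on one KPI")
plt.savefig("kpi_report.png")  # drop the chart into the report for decision-makers
```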


Saturday, 13 September 2014

Analytics for the Utility Industry


1. Analytics can help make the grid safer.
Utilities need to beef up security when it comes to smart grids, because a breach in smart grid security could spell disaster. Analytics could be used to help detect whether someone is inside the grid framework, tampering with meters and other tools, or whether energy is being improperly diverted. This has already begun to happen and will continue. Smart grids are a powerful tool, but they are also highly vulnerable, and analytics can help maintain and strengthen their security.
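As one hedged illustration of what this might look like, the sketch below flags smart meters whose consumption profile looks unlike the rest, as a starting point for a tampering or diversion investigation. The data file, column names and the choice of an isolation forest are assumptions for the example, not a description of any utility's actual system.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Assumed columns: meter_id, hour, kwh
readings = pd.read_csv("meter_readings.csv")

# One row per meter: average and variability of hourly consumption
profile = readings.groupby("meter_id")["kwh"].agg(["mean", "std"]).fillna(0)

# Flag the ~1% of meters that look least like the others
model = IsolationForest(contamination=0.01, random_state=42)
profile["flag"] = model.fit_predict(profile[["mean", "std"]])  # -1 means anomalous

suspects = profile[profile["flag"] == -1]
print(suspects.index.tolist())  # meters worth a closer look - not proof of tampering
```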
2. Drive innovation and ultimately job growth.
IT professionals are always in demand, but especially when it comes to the development side of the smart grid.
The market for IT solutions will be one of the largest categories among all of the components of a smart grid, if not the largest. The only larger category is transmission upgrades, which will be necessary for the grids of the future but aren't 100% tied to actual 'smart' technology.

3. Better understanding of how the grid works allows for better efficiency.
Analytics help utilities understand exactly how the grid works and identify trends. Using this knowledge, utilities can drive efficiency and make better use of their resources. This is something that must be done, too.
Growth in the smart grid is no longer a luxury for utilities: aging infrastructure, retiring personnel and the proliferation of distributed generation resources all make it a necessity.
4. Forecast severe weather events and other emergencies that could disrupt the grid.
One of the biggest issues with aging grids is that almost anything can take them down at any given time. With analytics, these events can be better predicted and prepared for. Severe storms are becoming more common, making grid outages more common as well.
Analytics enable operators to monitor and report the exact times of service interruption at each system endpoint, and to use these results to measure the improvement in restoration time from automated distribution processes. This allows utilities to identify and restore outages more rapidly, without having to rely on customer inquiries.
This means faster response times and faster repairs.
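A small sketch of the measurement side of this, assuming a log of interruption and restoration timestamps per endpoint; the file and column names are invented.

```python
import pandas as pd

# Assumed columns: endpoint_id, feeder_id, interrupted_at, restored_at
events = pd.read_csv("outage_events.csv", parse_dates=["interrupted_at", "restored_at"])

# Outage duration per endpoint, in minutes
events["minutes_out"] = (events["restored_at"] - events["interrupted_at"]).dt.total_seconds() / 60

# Restoration performance per feeder - the raw material for tracking improvement over time
by_feeder = events.groupby("feeder_id")["minutes_out"].agg(["count", "mean", "max"])
print(by_feeder.sort_values("mean", ascending=False).head(10))  # slowest-restoring feeders first
```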

5. Manage the load balance.
As grids age, properly balancing power distribution can sometimes be tricky. However, by using smart grid analytics, power can be distributed as needed in an effective manner.
Transformer load management analytics can use smart-meter data to continuously monitor and analyze distribution transformer loading levels and report on asset health, helping utilities make informed decisions to balance loads.
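To make that concrete, here is a hedged sketch of one piece of transformer load management: rolling smart-meter demand up to each transformer and comparing the peak against the transformer's rating. All file names, column names and the 90% threshold are assumptions for the example.

```python
import pandas as pd

# Assumed inputs:
#   smart_meter_intervals.csv: meter_id, transformer_id, timestamp, kw
#   transformer_ratings.csv:   transformer_id, rated_kw
meters = pd.read_csv("smart_meter_intervals.csv", parse_dates=["timestamp"])
ratings = pd.read_csv("transformer_ratings.csv")

# Total demand seen by each transformer in each interval
load = (meters.groupby(["transformer_id", "timestamp"])["kw"]
              .sum()
              .reset_index(name="total_kw"))

# Peak loading per transformer, compared with its rating
peak = load.groupby("transformer_id")["total_kw"].max().reset_index(name="peak_kw")
report = peak.merge(ratings, on="transformer_id")
report["utilisation"] = report["peak_kw"] / report["rated_kw"]

# Transformers running near or above rating are candidates for load balancing
print(report[report["utilisation"] > 0.9].sort_values("utilisation", ascending=False))
```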