Introduction to Big Data Concepts with Apache Spark

Introduction

This 7-day hands-on workshop introduces Apache Spark, an open-source cluster computing framework whose in-memory processing can make analytics applications up to 100 times faster than disk-based engines such as Hadoop MapReduce for certain workloads. Developed in the AMPLab at UC Berkeley, Spark reduces the complexity of interacting with large data sets, speeds up processing, and supports data-intensive, near-real-time analytical applications.
Highly versatile across deployment environments, and with a strong foundation in functional programming, Spark is known for making it easy to write algorithms that extract insight from complex data. Spark became a top-level Apache project in 2014 and continues to grow today.

When and Where?

  • Days: 29 August 2015 - 19 September 2015 (every Saturday and Sunday)

Topics

Introduction to Data Analysis with Spark

  • What is Apache Spark?
  • Introduction to Core Spark Concepts
  • Working in the PySpark shell
  • Working with PySpark in an IPython notebook
  • Standalone Applications
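
As a first taste of the material above, here is a minimal sketch of a standalone PySpark application (the file name and master URL are placeholder assumptions; in the PySpark shell a SparkContext is already available as sc):

  # A minimal standalone PySpark application.
  from pyspark import SparkConf, SparkContext

  conf = SparkConf().setAppName("intro-example").setMaster("local[*]")
  sc = SparkContext(conf=conf)

  # Load a local text file into an RDD and count its lines.
  lines = sc.textFile("README.md")   # placeholder input file
  print(lines.count())

  sc.stop()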

Programming with RDDs

  • RDD Basics
  • Creating RDDs
  • RDD Operations
  • Passing Functions to Spark
  • Common Transformations and Actions
  • Caching RDDs
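
A minimal sketch of the RDD workflow covered in this module (local master and toy data are assumptions):

  from pyspark import SparkContext

  sc = SparkContext("local[*]", "rdd-basics")

  # Creating an RDD from a Python collection.
  nums = sc.parallelize(range(1, 11))

  # Transformations are lazy: nothing runs until an action is called.
  squares = nums.map(lambda x: x * x)
  evens = squares.filter(lambda x: x % 2 == 0)

  # Cache an RDD that will be reused by several actions.
  evens.cache()

  print(evens.count())      # action: triggers the computation
  print(evens.collect())    # action: reuses the cached result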

Working with Key-Value Pairs

  • Motivation
  • Creating Pairwise RDDs
  • Transformations on Pairwise RDDs
  • Actions Available on Pairwise RDDs
  • Data Partitioning. Key Performance Considerations
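
A short sketch of pair-RDD transformations, actions and partitioning (toy data; the number of partitions is an arbitrary choice):

  from pyspark import SparkContext

  sc = SparkContext("local[*]", "pair-rdds")

  # Build a pairwise RDD of (word, 1) tuples.
  words = sc.parallelize(["spark", "rdd", "spark", "pair", "rdd", "spark"])
  pairs = words.map(lambda w: (w, 1))

  # Transformation on pair RDDs: aggregate the values per key.
  counts = pairs.reduceByKey(lambda a, b: a + b)

  # Actions available on pair RDDs.
  print(counts.collectAsMap())   # {'spark': 3, 'rdd': 2, 'pair': 1}
  print(pairs.countByKey())

  # Data partitioning: control how keys are spread across partitions,
  # which matters for joins and repeated key-based lookups.
  partitioned = counts.partitionBy(4).persist()
  print(partitioned.getNumPartitions())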

Running on a Cluster

  • Configuring a Spark Cluster
  • Deploying Applications with spark-submit
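
A hedged sketch of an application intended for spark-submit (the master URL, HDFS path and file name are assumptions to be adapted to the actual cluster):

  # my_app.py -- a hypothetical application deployed with spark-submit, e.g.:
  #   spark-submit --master spark://master-host:7077 my_app.py
  from pyspark import SparkConf, SparkContext

  conf = SparkConf().setAppName("submitted-app")   # the master is supplied by spark-submit
  sc = SparkContext(conf=conf)

  data = sc.textFile("hdfs:///data/input.txt")     # placeholder HDFS path
  print(data.count())

  sc.stop()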

Structured Data with Spark SQL

  • The DataFrame API
  • Inner Joins and Left Outer Joins in the RDD API versus in Spark SQL
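
A small sketch comparing the two join flavours above, assuming the Spark 1.x SQLContext API (newer releases use SparkSession); the table contents are invented:

  from pyspark import SparkContext
  from pyspark.sql import SQLContext   # Spark 1.x entry point

  sc = SparkContext("local[*]", "spark-sql-joins")
  sqlContext = SQLContext(sc)

  people = sqlContext.createDataFrame(
      [(1, "Ana"), (2, "Bob"), (3, "Cora")], ["id", "name"])
  orders = sqlContext.createDataFrame(
      [(1, 30.0), (1, 12.5), (3, 99.9)], ["person_id", "total"])

  # Inner join: only people with at least one order.
  people.join(orders, people.id == orders.person_id).show()

  # Left outer join: every person, with nulls where no order exists.
  people.join(orders, people.id == orders.person_id, "left_outer").show()

  # The RDD-API equivalent would use pair RDDs keyed by id together with
  # rdd.join() / rdd.leftOuterJoin().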

Building Interactive Data Analytics Apps With Flask and Spark

  • A Simple Example - Parameterized CrossFilter Histograms
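
The workshop builds its own CrossFilter-based application; the following is only a generic sketch of the Flask-plus-Spark pattern (endpoint name, port and data are assumptions):

  from flask import Flask, jsonify, request
  from pyspark import SparkContext

  sc = SparkContext("local[*]", "flask-spark")
  app = Flask(__name__)

  # Pre-load and cache the data once, at startup.
  values = sc.parallelize(range(1000)).cache()

  @app.route("/histogram")
  def histogram():
      # The number of buckets is a request parameter, e.g. /histogram?buckets=10
      buckets = int(request.args.get("buckets", 10))
      edges, counts = values.histogram(buckets)
      return jsonify({"edges": edges, "counts": counts})

  if __name__ == "__main__":
      app.run(port=5000)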

Spark Streaming

  • A Simple Example - Stream of Integers / Moving Average
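
A sketch of the moving-average idea using a windowed DStream (the port, window and slide durations are assumptions; integers can be fed in with e.g. nc -lk 9999):

  from pyspark import SparkContext
  from pyspark.streaming import StreamingContext

  sc = SparkContext("local[2]", "moving-average")
  ssc = StreamingContext(sc, 5)   # 5-second micro-batches

  # One integer per line arriving on a local socket.
  numbers = ssc.socketTextStream("localhost", 9999).map(int)

  # Moving average over a 30-second window, recomputed every 5 seconds:
  # pair each number with a count of 1, sum both over the window, then divide.
  windowed = (numbers.map(lambda n: (n, 1))
                     .reduceByWindow(lambda a, b: (a[0] + b[0], a[1] + b[1]),
                                     None, 30, 5))
  windowed.map(lambda s: s[0] / float(s[1])).pprint()

  ssc.start()
  ssc.awaitTermination()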

Advanced Spark Programming

  • Working on a Per-Partition Basis
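
A minimal sketch of per-partition processing with mapPartitions (toy data and partition count are assumptions):

  from pyspark import SparkContext

  sc = SparkContext("local[*]", "per-partition")

  nums = sc.parallelize(range(1, 101), 4)

  # Working on a per-partition basis lets you set up expensive state
  # (a parser, a database connection, ...) once per partition
  # instead of once per element.
  def sum_and_count(iterator):
      # The whole partition is available as a plain Python iterator.
      total, count = 0, 0
      for x in iterator:
          total += x
          count += 1
      yield (total, count)

  per_partition = nums.mapPartitions(sum_and_count).collect()
  print(per_partition)

  # Combine the per-partition results into one global average.
  total = sum(t for t, _ in per_partition)
  count = sum(c for _, c in per_partition)
  print(total / float(count))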

Machine Learning with MLlib

  • Overview and Terminology
  • Machine Learning Basics. What is a Feature
  • The LabeledPoint Data Type
  • TF-IDF
  • Preparing The Data For Analysis / Stemming, Stopword Elimination
  • LogisticRegressionWithSGD / Filtering Spam
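
A sketch of the spam-filtering pipeline outlined above, in the style of the classic MLlib example (the file names and feature-space size are assumptions; stemming and stopword elimination are omitted for brevity):

  from pyspark import SparkContext
  from pyspark.mllib.feature import HashingTF
  from pyspark.mllib.regression import LabeledPoint
  from pyspark.mllib.classification import LogisticRegressionWithSGD

  sc = SparkContext("local[*]", "spam-filter")

  # Placeholder input files: one message per line.
  spam = sc.textFile("spam.txt")
  ham = sc.textFile("ham.txt")

  # Map each message to a sparse term-frequency vector.
  tf = HashingTF(numFeatures=10000)
  spam_features = spam.map(lambda msg: tf.transform(msg.split(" ")))
  ham_features = ham.map(lambda msg: tf.transform(msg.split(" ")))

  # Label spam as 1 and ham as 0 using the LabeledPoint data type.
  examples = (spam_features.map(lambda f: LabeledPoint(1, f))
              .union(ham_features.map(lambda f: LabeledPoint(0, f)))).cache()

  model = LogisticRegressionWithSGD.train(examples)

  # Classify a new message with the trained model.
  test = tf.transform("cheap pills win money now".split(" "))
  print(model.predict(test))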

Exercises

  • The Complete Works of Shakespeare. Computing Word Counts
  • Detecting the 12-01-2001 Anomaly in the CrossFilter Data Set
  • Geographical Data - Analysis of City Initials per Country
  • Applying PageRank on a Subset of Wikipedia
  • Twitter Stream / Sentiment Analysis for Hashtags
  • The Brown Corpus (NLTK). Stylistic Classification with Cosine Similarity
  • Sensor Data. Detecting Tachycardia and Bradycardia in an ECG Stream
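
For orientation, a minimal sketch of the first exercise, computing word counts over the Complete Works of Shakespeare (the input path is a placeholder):

  import re
  from pyspark import SparkContext

  sc = SparkContext("local[*]", "shakespeare-wordcount")

  lines = sc.textFile("shakespeare.txt")   # placeholder path to the text file

  counts = (lines.flatMap(lambda line: re.split(r"\W+", line.lower()))
                 .filter(lambda w: w != "")
                 .map(lambda w: (w, 1))
                 .reduceByKey(lambda a, b: a + b))

  # Print the 10 most frequent words.
  for word, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
      print(word, n)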

Registration is now closed

If you have any questions, please ask them here.

Prerequisites

This workshop requires a solid background in functional programming. Knowledge of Python is helpful, but not mandatory.

Instructor

Dan Șerban



Participants

Cristian Valentin Buza
Mihai Chirculescu
Tudor Emil Coman
Costin Papuc