• Build and deploy cutting-edge data products and POCs on massive data sets, following agile development principles.
• Ensure each data product is delivered with thorough testing and is of the highest quality.
• Communicate results to other team members in a clear and intuitive manner.
• Interface with leadership and team members regularly to brainstorm ideas and communicate progress and risks.
• Use JIRA and Confluence as tools for Agile project planning.
• Update and communicate progress and results to other stakeholders in a clear, concise, and timely manner.
• Other duties will be disclosed upon application.
• 2-5 years of related experience in a Big Data environment, specifically Hive/Spark, where you have deployed reliable models that scale smoothly on high-volume (1 TB+) and high-dimensionality (500+ variables per schema) data.
• MS or PhD in Mathematics, Computer Science, Applied Computing, Engineering, or Economics (required)
• Highly enthusiastic about uncovering actionable insights from data and conveying these insights to the business as stories that stick.
• Experience with SQL and Python/Spark/R to deploy machine learning products into production.
• Experience in the Telecom, retail, e-commerce, consumer packaged goods (CPG), mobile, or consumer electronics industries.