Increasingly, people demand highly tailored services to fit their busy lives. To meet this demand, computers and software need to understand individual interests. Today, "big data" statistical models are typically applied to this problem, but the data required to compute statistically significant results is simply not available to anyone not named Google or Facebook. Even those juggernauts achieve scale only by grouping individuals into buckets of like interests (e.g. friends in a social graph). This grouping makes many assumptions about individuals, and those assumptions are often wrong. How often is content in the "recommended for you" section of a website actually relevant to your interests? As people demand better, more personalized attention, solving this problem represents a multi-billion-dollar opportunity.
The Primal Solution
Primal is an artificial intelligence (AI) solution that derives meaning, in the form of knowledge models, in real time and from sparse information. This ability to derive meaning, particularly individual interests, is one of the most disruptive innovations of this decade, and it solves problems that today's expensive big data solutions cannot.
We need more talented engineers like you to help us build this new class of product.
You will join a small team of smart, self-directed engineers, product managers, and designers. You will have the opportunity to define your solution and own the product deliverables you commit to.
The Senior Data Scientist will be primarily responsible for developing the Primal AI platform and for continuously improving our core AI through NLP and ML.
Responsibilities:
- Instrument new and existing applications to acquire data for analytics and machine learning
- Build and maintain pipelines to clean, extract and transform data
- Integrate and deploy models within our production code base
- Write production code with our developers and provide support to data scientists
- Participate regularly in our software development process (sprint planning, architecture & design discussions, unit tests, code reviews, continuous integration)
- Work with NLP libraries and analyze unstructured data and text corpora to build the Primal vocabulary
- Build and evaluate predictive models through feature engineering and ML model building
- Collaborate directly with software developers and data engineers, as well as business stakeholders
- Build prototypes to evaluate new features in the technology
Required skills:
- Strong coding skills, preferably in Scala and Python
- Familiarity with NoSQL (e.g. MongoDB) and SQL databases
- Experience with machine learning libraries such as Spark MLlib, scikit-learn, and TensorFlow
- Bonus: Familiarity with natural language processing, information extraction and/or information retrieval
- Experience working with RDF, knowledge graphs and creating data visualizations
Skills that would be nice to have:
- ELK stack
- API Integration, SaaS (Azure Cloud)