Big Data Engineer
Design, development and maintenance of Big Data infrastructure for Machine Learning tasks, mainly production training/prediction sets (Kafka, Flink, Spark)
Development and maintenance of Deep.BI Big Data platform (Kafka, Flink, Druid)
Level: Senior (more than 5 years of experience)
Location: Warsaw, Poland
About Deep BI, Inc.
Deep.BI is a data platform for media companies. It can save up to 95% of the cost of building and maintaining an in-house big data solution, along with years of development time.
Data plays a fundamental role in every aspect of media, including:
new product development
increasing audience engagement
monetization (subscription, ads, branded content)
Deep.BI makes data collection, integration, storage, analytics and usage easy. It removes the complexity of implementing big data technology and thus minimizes risk and cost. We use a modern, real-time stack including Node.js, Kafka, Flink and Druid. We have built our own HA, hybrid data cloud (currently ~400 cores) and are scaling it horizontally.
We also experiment with a conversational user interface for our analytics platform, where customers get insights from chatbots. As a next step, we are working on bot-to-bot communication to automate processes (RPA, Robotic Process Automation).
We're a young startup with a small team of enthusiasts, solid financing from well-known business angels, and our first big media customers from the US and Europe.
We're looking for the best, most passionate people. Let's talk and find out if there's a fit.
You will design, develop and maintain a Big Data infrastructure for various Machine Learning tasks
You will help to expand our current platform capabilities and architect new strategies and applications
You will shape the future of what data-driven media companies look like, drive processes for extracting and using that data in creative ways, and create new lines of thinking for the financial success of our customers
You'll apply state-of-the-art Big Data tools and techniques, and advise teams on them, in order to derive business insights, solve complex business problems and improve decisions
Degree in a quantitative discipline (e.g. Statistics, Mathematics, Econometrics, Computer Science)
Hands-on experience in designing and developing Big Data infrastructures preferably in a marketing and sales context
Advanced knowledge of data management tools, including NoSQL and SQL/RDBMS systems (e.g. Druid, Cassandra, HBase, Aerospike), Hadoop, Spark, Flink and/or other big data technologies
Experience in stream processing with Kafka, Spark Streaming, Flink
Programming skills in at least one of Java, Scala, Python
Intellectual curiosity, along with excellent problem-solving and quantitative skills, including the ability to disaggregate issues, identify root causes and recommend solutions
Market salary, different types of contract available + paid holidays (20 or 26 days)
Work in a young startup with solid financing, among passionate and friendly people
Private medical care
Stock option plan
Flexible working hours, possibility of occasional remote work
Each member of the team has real influence on the product - a state-of-the-art big data & AI platform
Great office location - a beautiful co-working space on Senatorska Street
You believe there's a fit - apply now!
Any questions? Drop us an email: email@example.com