Job posting details
Reference:
125499
Published on:
6 November 2020
Position type:
Permanent contract (CDI)
Work location:
Tunis, Tunisia
Experience:
2 to 5 years
Education:
Bac + 5
Availability:
Full time
Languages:
French, English
Company:
SESAMm
Sector: IT / telecoms
Size: 20 to 100 employees
Job description:

Context 

At SESAMm, we provide tools for the asset management industry, based on our proprietary big data, artificial intelligence, and natural language processing technologies. We analyze a huge amount of unstructured textual data extracted in real time from millions of news articles, blogs, forums, and social networks. We use this alternative data in combination with standard market data to provide innovative analytics on thousands of financial products across all asset classes, and to develop custom investment strategies using our internal machine learning and statistical expertise. With more than EUR 8M raised since its creation in 2014, major clients around the world, numerous awards, and exponential team growth, we are expanding quickly in Western Europe, the Americas, and Asia.

Join SESAMm, an innovative and fast-growing FinTech company!

Job Description 

Overarching goal: you will build and scale data components for key SESAMm products, such as the raw data ingestion pipeline, job scheduling, and ETL design/optimization; optimize the migration of the Product Data Platform toward cloud or on-premise solutions; and set up data development best practices for other tech members.

Communicate your team's work through weekly updates.

Key activities:

- Design and implement the best data pipelines for our text-based products (ingestion, processing, exposure); a minimal ingestion sketch follows the technologies list below:

  • Test and design state-of-the-art data ingestion pipelines
  • Implement efficient streaming services

- Lead the acquisition of new data sources

  • For each new data source, assess its feasibility and potential
  • Integrate the new data into the data lake

- Develop data request tooling for Data Scientists and technical teams

  • Simplify use of the data request engine
  • Optimize current queries

- Implement and maintain critical data systems

  • Process and integrate data into new databases or the data lake
  • Ensure maintainability and build update mechanisms

Technologies used: Spark, AWS EMR, Kafka, SQL, MongoDB...
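For illustration, a minimal sketch of such an ingestion step, written with Spark Structured Streaming in Python, could look like the following. The broker address, topic name, document schema, and data lake paths are hypothetical placeholders, not a description of our actual platform.

# Minimal sketch: consume raw text documents from a Kafka topic with Spark
# Structured Streaming and land them in the data lake as Parquet files.
# Broker address, topic name, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("raw-text-ingestion").getOrCreate()

# Expected shape of each incoming JSON message (placeholder fields).
schema = StructType([
    StructField("source", StringType()),
    StructField("published_at", TimestampType()),
    StructField("text", StringType()),
])

# Read the raw event stream from Kafka.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "raw-articles")
    .load()
)

# Parse each Kafka message value into structured columns.
documents = (
    raw.select(from_json(col("value").cast("string"), schema).alias("doc"))
    .select("doc.*")
)

# Append each micro-batch to the data lake, partitioned by source.
query = (
    documents.writeStream
    .format("parquet")
    .option("path", "s3://datalake/raw/articles/")
    .option("checkpointLocation", "s3://datalake/checkpoints/raw-articles/")
    .partitionBy("source")
    .outputMode("append")
    .start()
)

query.awaitTermination()

In practice, a job of this kind would typically run on AWS EMR and feed downstream text processing steps.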
Candidate Profile
Education Requirements:
Engineering school or university degree with a specialization in IT, software engineering, or data science. Other profiles are welcome to apply provided they have significant IT experience.
Work Experience and Skills Requirements:
  • Work experience: 2 to 5 years in data engineering or any at-scale data processing role.
  • Good understanding of different databases and data storage technologies
  • Very good knowledge of distributed computing systems, such as Spark, both standalone and on clusters
  • Good knowledge of cloud computing systems, such as AWS, GCP, or Azure ML.
  • Development: mastery of at least one language among Python, Java, and/or Scala, with at least a working knowledge of Python.
  • Good communication skills and the ability to explain technical topics clearly: understand technical teams' needs and issues, and collaborate with several internal teams. Team player.
  • Additional skills: strong interest in data science and Natural Language Processing.
You should be able to work in a product team and show high motivation. This job requires autonomy, curiosity toward a changing environment and real dedication to solving problems for clients.
Working conditions 
  • Location: Tunis
  • Duration: permanent contract / full time (100%)