CapBridge Pte Ltd
Country: Singapore
  • Freelance

Freelance Data Scientist

CapBridge is a private institutional capital-raising platform that connects institutional and accredited investors with mid- to late-stage growth companies. The platform provides a secure, effective environment where investors can confidentially access growth-stage companies seeking financing, and it tailors deal flow to each investor, showing them only the deals that meet their investment criteria. It also integrates useful tools that optimise time and resources, including due diligence, deal-flow management, a secure data room, FAQs, closing, and completion checklists.

Our digital platform is fully integrated with leading industry databases. With access to over 5 million records, it provides a sophisticated, targeted environment for intelligent deal sourcing and matching. CapBridge has a partnership with the Singapore Exchange (SGX) that allows us to facilitate and accelerate the IPO process for growth companies.

Our management team comes from multiple disciplines, including venture capital, investment banking, technology commercialisation and portfolio management. Our vision is to build CapBridge into a world-class marketplace where investors and companies can achieve their growth objectives in an intelligent, secure and efficient manner.

Responsibilities

  • Data Collection & Management – Collect, normalize and aggregate structured and unstructured data from multiple company and investor database sources
  • Algorithm Development – Build new and fine-tune existing algorithmic models for various business use cases, leveraging machine learning, predictive analytics and related techniques

Requirements

  • Experience building end-to-end machine learning / predictive models using data mining, machine learning and/or regression analysis
  • Familiarity with database modelling and data warehousing principles with a working knowledge of SQL
  • Track record of building and aggregating large data sets using data pipelines / APIs
  • Strong programming skills in at least one scripting language, preferably Python
  • Familiarity with big-data frameworks and warehouses such as Spark, Hadoop and Redshift
