The best and most innovative software solutions for actuarial and data modelling are not proprietary but open source, supported by major technology companies. These solutions are designed to be efficient, cloud friendly and scalable, handling vast amounts of data while keeping pace with the latest technological innovations.

For this reason we build on the shoulders of tech giants, focusing on the bridges and integrations required for actuarial and other use cases, and on ensuring that all non-functional requirements are met.


We offer bespoke cloud-native (or on-premises) solutions for actuarial and other workloads.

  • Bespoke solutions for data extraction, transformation (e.g. financial projections) and loading (ETL)
  • Bespoke end-to-end workflow automation solutions
  • Cloud-native and cloud independent
    • Efficient cloud-based infrastructure architecture
    • Not dependent on any cloud provider
    • Serverless, leading to significant cost savings
    • Can also set up on-premises
  • Excellent support
  • Open architecture and built on cutting edge well-supported software stacks
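A minimal sketch of the kind of ETL step described above, in Python. All names and the projection formula here are hypothetical, for illustration only; a production pipeline would read from and write to a database or data lake.

```python
import csv
import io

def project_cashflows(premium: float, years: int, growth: float = 0.05) -> list[float]:
    # Hypothetical transformation: project a premium forward at a flat growth rate.
    return [round(premium * (1 + growth) ** t, 2) for t in range(years)]

def etl(source: io.TextIOBase) -> list[dict]:
    # Extract: read raw policy rows from a CSV extract.
    rows = csv.DictReader(source)
    out = []
    for row in rows:
        # Transform: parse types and attach a simple financial projection.
        premium = float(row["annual_premium"])
        out.append({
            "policy_id": row["policy_id"],
            "projection": project_cashflows(premium, years=3),
        })
    # Load would normally write to a target store; here we just return the records.
    return out

raw = io.StringIO("policy_id,annual_premium\nP001,1000\nP002,2500\n")
records = etl(raw)
print(records[0]["projection"])  # [1000.0, 1050.0, 1102.5]
```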


We have chosen our software and recommended programming languages carefully by researching which tools are the most widely used.

  • Enabling software developers to assist with actuarial modelling
  • Most common / widely used software skills include Python and SQL
  • Use what actuaries and others are taught at university (undergraduates learn R and Python)
  • Lower onboarding cost and time for companies
  • Bridging the gap between the best big data tools and the financial services industry
  • Enabling companies to store more data at a lower cost by using data lake technologies
  • Enabling efficient processes and reporting on growing data volumes
  • Well-supported open-source software stacks with many developers, making support less expensive
  • Enabling serverless computing in the cloud
  • Enabling companies to be cloud provider independent


Our software stack includes Apache Spark, Hadoop, Hive, Airflow, NiFi, Zeppelin, Superset, Delta Lake, Jupyter, PostgreSQL and MariaDB.

Our modular software stack has been set up in Docker containers and can be orchestrated with Kubernetes.

  • Apache Spark
    • General ETL, including financial projections, in a unified data framework
    • GPU enabled ETL and financial projections
    • Cloud-native or on-premises distributed computing, scaling to thousands of cores
      • Including local machine computing
    • Supports SQL, Python, R, Scala, C#, F#, Julia and other languages of choice
    • Abstracts querying of databases and other data sources (users only need ANSI SQL rather than the SQL dialect of each database)
  • Apache Hadoop
    • Efficient and robust distributed storage (HDFS)
    • Cloud-native or on-premises distributed computing, scaling to thousands of cores
  • Apache Hive
    • Data warehouse layer providing SQL query capabilities over distributed storage
    • Enables setting up a cloud-native data lake
  • Delta Lake
    • Reliable and auditable data (versions get automatically stored)
    • Lower storage requirements for data (roughly 6x storage savings potential) through columnar storage
  • Apache Airflow
    • Scheduling and orchestration of Python scripts, with progress visualization and logging
  • Apache NiFi
    • End-to-end process automation across different solution components
  • Apache Zeppelin, Jupyter Notebook, JupyterLab
    • Web-based notebooks for building quick proofs of concept (POCs) or minimum viable products (MVPs)
    • Allow visualization of data
    • Allow teams to build and work together
  • PostgreSQL and MariaDB
    • Open-source relational database options
  • Docker
    • Each component is available as a containerized application
  • Kubernetes
    • Orchestration of containerized applications, enabling serverless computing (containers start when required and shut down when no longer in use).
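One reason columnar formats such as those underlying Delta Lake need less storage is dictionary encoding: each repeated column value is stored once, with a compact integer code per row. A toy, standard-library-only sketch (the data is hypothetical, and real formats such as Parquet combine this with further compression):

```python
import json

# A column of 10,000 values drawn from only three distinct products,
# as is typical of categorical policy data.
products = ["TERM_LIFE", "WHOLE_LIFE", "ENDOWMENT"]
column = [products[i % 3] for i in range(10_000)]

# Row-oriented storage repeats the full string for every row.
plain = json.dumps(column).encode()

# Dictionary encoding: store each distinct value once, then one small
# integer code per row (here a single byte each).
dictionary = sorted(set(column))
codes = bytes(dictionary.index(v) for v in column)
encoded_size = len(json.dumps(dictionary).encode()) + len(codes)

print(len(plain), encoded_size)  # encoded form is an order of magnitude smaller
```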
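The dependency-driven orchestration that tools like Airflow and NiFi provide at scale can be illustrated with Python's standard-library `graphlib`: tasks declare their upstream dependencies, and the scheduler derives a valid execution order. The task names below are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "project_cashflows": {"validate"},
    "load_results": {"project_cashflows"},
    "report": {"load_results", "validate"},
}

# Derive an execution order that respects every dependency;
# an orchestrator would run each task (and retry, log, visualize) at this point.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```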


We have partnered with Symbyte to bring customers to the forefront of scalable computing with the following services:

  • Modern, non-admin desktop apps. Turn your Excel analysis sheets into apps, backed by all the power of Python analysis.
  • Cluster-based analysis. Analyse billions of records in single-digit minutes.
  • Infinitely scalable, low-cost backups and archiving. Never worry about disk space again; store your CSVs forever, cheaply.
  • Migrate your software to the cloud. Never worry about users interfering with each other’s software again. Create 30 servers or 150 at the click of a button.
  • Scalable GPU enabled ETL and financial projections


We can help clients switch to using the best end-to-end toolset in the industry, starting with tackling the most significant pain points.

  • Data-centric architecture
  • End-to-end workflow process/pipeline design and recommendations
  • End-to-end data lakehouse/data warehouse design and recommendations
  • Data processing (ETL) automation
  • Data-driven decision-making automation, including applying business rules, ML or AI
  • Design efficient cloud-based infrastructure architecture
  • Assist with setting up POCs, MVPs
  • Assist with productionisation of processes
  • Industry independent robotic process automation
  • Actuarial Modelling as a Service (AMaasing)


We have partnered with Dupro and Actuartech to offer clients online, interactive training sessions for upskilling in actuarial and data science topics, including SQL, Python, big data toolsets and the IFRS 17 reporting standard.

  • Assist with upskilling your teams with a deeper understanding of IFRS 17
  • Assist with upskilling teams in best-in-class toolsets for data manipulation and modelling
    • SQL
    • Python
    • Apache Spark, Jupyter notebooks, Zeppelin notebooks
    • GPU-enabled data reads, writes and transformations, as well as financial projections and machine learning models
  • Assist with upskilling teams to better understand how machine learning and artificial intelligence can add business value
  • Practical application of big data toolsets to solve typical actuarial problems