Data engineering and BI
While working with clients across industries, we have found that the hardest part of BI reporting, ML, or model development is often not the task itself but the data management around it. Data engineering is frequently the most challenging part of the problem.
HOW CAN WE HELP WITH DATA ENGINEERING AND BI CHALLENGES
Our data engineers and software developers help you build a solid data architecture. We can help you with:
Design of your cloud architecture
Building your data lake or data warehouse
Data preparation and cleansing for your BI analytics or model development
Automation of data ingestion
Custom reports and dashboards
Building data transformation and calculation engines
We have identified, developed, and successfully implemented solutions for both large and small companies, always with an emphasis on maximizing business value.
In early 2018, a new methodology for calculating provisions for Expected Credit Losses (ECL) under IFRS 9 came into force for the banking sector. Our team was tasked with implementing a solution for the retail credit exposures of Raiffeisen Bank International (RBI). The calculation is used to check the plausibility of the results provided by each RBI subsidiary bank.
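As a rough illustration of what such a plausibility check computes, the sketch below uses the standard simplified expected-loss formula (ECL per exposure as PD x LGD x EAD); the function name, field names, and the flat single-period view are assumptions for illustration, not RBI's actual methodology.

# Minimal sketch of a simplified ECL calculation (illustrative only).
# Assumed per-exposure inputs: probability of default (pd), loss given
# default (lgd) and exposure at default (ead).
def expected_credit_loss(exposures):
    """Return total ECL as the sum of PD * LGD * EAD over all exposures."""
    return sum(e["pd"] * e["lgd"] * e["ead"] for e in exposures)

portfolio = [
    {"pd": 0.02, "lgd": 0.45, "ead": 10_000.0},
    {"pd": 0.10, "lgd": 0.60, "ead": 2_500.0},
]
print(expected_credit_loss(portfolio))  # 90.0 + 150.0 = 240.0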
We have created a unique solution for recurring credit-risk stress tests at Raiffeisen Bank International, which allows users to define their own test scenarios for simulating portfolio development and calculating individual risk parameters such as RWA or ECL. Our application was successfully used for the 2018 EBA stress tests and continues to be used extensively for other stress-testing needs by internal and regulatory controllers.
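A scenario-driven stress test of this kind can be sketched roughly as below: a scenario shifts the risk parameters of each exposure and the stressed loss is recomputed. The multiplier-based scenario format and the function names are assumptions for illustration, not the application's actual interface.

# Illustrative scenario application: scale PD and LGD by scenario multipliers,
# then recompute the expected loss of the stressed portfolio.
def expected_credit_loss(exposures):
    return sum(e["pd"] * e["lgd"] * e["ead"] for e in exposures)

def apply_scenario(exposures, scenario):
    stressed = []
    for e in exposures:
        stressed.append({
            "pd": min(1.0, e["pd"] * scenario.get("pd_multiplier", 1.0)),
            "lgd": min(1.0, e["lgd"] * scenario.get("lgd_multiplier", 1.0)),
            "ead": e["ead"],
        })
    return stressed

portfolio = [{"pd": 0.02, "lgd": 0.45, "ead": 10_000.0}]
adverse = {"pd_multiplier": 2.0, "lgd_multiplier": 1.2}
print(expected_credit_loss(apply_scenario(portfolio, adverse)))  # stressed loss under the adverse scenario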
FROM OUR BLOG
E-commerce platforms like Shopify include their own reporting dashboards, which cover plenty of figures generated by the shop, but these might not be sufficient for every architecture. We looked at the case where a customer wants real-time analytics that scale seamlessly for peak periods like Black Friday.
The MGS team is now introducing a connector between MGS and Databricks. Teams governing the firm's model portfolio in MGS - by no means only ML/AI models - get access to the details of model development and monitoring right at their fingertips. Teams using the Databricks platform can write code that interacts with the inventory.
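As a rough idea of what such interaction could look like from a Databricks notebook, the sketch below reads model metadata from the inventory over a REST call; the endpoint, token handling, and response fields are purely hypothetical and not the connector's actual API.

# Hypothetical example: query a model-inventory endpoint from a Databricks notebook.
# The URL, authentication, and response fields are illustrative assumptions only.
import os
import requests

INVENTORY_URL = os.environ["MGS_INVENTORY_URL"]  # hypothetical endpoint, e.g. https://mgs.example.com/api/models
TOKEN = os.environ["MGS_API_TOKEN"]              # assumed token-based authentication

response = requests.get(
    INVENTORY_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()

for model in response.json():
    # Assumed fields; the real inventory schema may differ.
    print(model.get("name"), model.get("status"), model.get("last_monitoring_run"))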
Can we use Databricks as a runtime platform for complex applications, using the standard development model with git, deployment pipelines, and several independent modules / services?
Software goes through many stages during the development cycle. It usually starts as source code in an IDE; different parts are executed in unit tests, and maybe even whole components are tested. It is then packaged (compiled) and deployed to some server (staging, production,…). That is a very rough lifecycle which is not applicable in all cases, but at least during my time at CloseIT, many projects followed it.
When I started putting together my first AWS SAM project, I was confused by the project structure - as always when I'm starting a new project with a new technology. You can easily end up with a bloated project where code is duplicated in each Lambda function.
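One common way to avoid that duplication - sketched below under assumed names - is to keep shared helpers in a common package that every handler imports (packaged with the functions, for instance via a Lambda layer), instead of copy-pasting them into each function's folder. The directory layout and module names here are illustrative, not a prescribed SAM structure.

# Assumed layout (illustrative):
#   src/
#     shared/response.py      <- helpers used by every Lambda function
#     list_orders/app.py      <- one Lambda handler
#     create_order/app.py     <- another Lambda handler

# src/shared/response.py
import json

def json_response(status_code, body):
    """Build the API Gateway proxy response shared by all handlers."""
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

# src/list_orders/app.py
# from shared.response import json_response  # works once 'shared' is packaged with the function
def handler(event, context):
    return json_response(200, {"orders": []})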