In Brno, a city at the heart of great software development, we launched a blog to share our thoughts.
Docker images are like onions or ogres: they have layers, and that can sometimes be a problem. Especially with the VFS storage driver, where each layer is stored as a full copy of the filesystem. We present several ways to squash multiple image layers into one.
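One common way to end up with a near-single-layer result is a multi-stage build, where only the finished artifacts are copied into a fresh final image. The sketch below is illustrative (base images and paths are assumptions, not taken from the article):

```dockerfile
# Build stage: all the intermediate layers stay here.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: copying only the artifact from the build stage
# leaves a minimal image with just a couple of layers.
FROM alpine:3.20
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

The experimental `docker build --squash` daemon flag is another route, though multi-stage builds work without enabling experimental features.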
As more and more of a web application lives on the client computer, it's harder to update it right when you want, because browsers keep serving cached copies of old resources. One of the solutions is to change resource file names after every update. Webpack can make this process straightforward.
This blog presents an example of how an audit trail can be implemented with the MongoDB database. In fact, we implemented exactly this system in one of our older CRM projects.
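The core idea of an audit trail is to record who changed what and when as separate documents. This is a minimal sketch of building such a document as a field-level diff; the function and field names are illustrative assumptions, not the article's actual implementation:

```javascript
// Build an audit-trail document describing one change to a record.
// In a real system it would be stored in a dedicated collection,
// e.g. db.collection('audit').insertOne(entry).
function auditEntry(collectionName, docId, user, before, after) {
  // Record only the fields that actually changed.
  const changes = {};
  for (const key of new Set([...Object.keys(before), ...Object.keys(after)])) {
    if (before[key] !== after[key]) {
      changes[key] = { from: before[key], to: after[key] };
    }
  }
  return {
    collection: collectionName, // which collection was touched
    docId,                      // _id of the modified document
    user,                       // who made the change
    at: new Date(),             // when it happened
    changes,                    // field-level diff
  };
}

const entry = auditEntry('customers', 42, 'alice',
  { name: 'ACME', tier: 'basic' },
  { name: 'ACME', tier: 'gold' });
```

Querying the audit collection by `docId` then reconstructs the full history of a record.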
There are many ways to make sure an implemented software feature works. But sometimes, you need proof not only for yourself but also for your customer. What if the customer could monitor application functions themselves?
Parsing a Unirest response to JsonNode can lead to a parsing error on Databricks if the top-level JSON structure is an array. We propose a workaround that works best in custom Java libraries.
E-commerce platforms like Shopify come with reporting dashboards covering tons of figures generated by the shop, but that might not be sufficient for every architecture. We looked at a case where a customer wants real-time analytics that scales seamlessly for peak periods like Black Friday.
The MGS team is now introducing a connector between MGS and Databricks. Teams governing the firm's portfolio in MGS - definitely not only ML/AI models - get access to the details of model development and monitoring right at their fingertips, while teams using the Databricks platform can write code that interacts with the inventory.
Can we use Databricks as a runtime platform for complex applications, using the standard development model with Git, deployment pipelines, and several independent modules / services?
The MRM methodology is relatively well-defined, including the concept of model monitoring for particular model types. But how can we take these mostly manual procedures and deploy them at scale for the entire portfolio?
End User Computing (EUC) items exist in every larger organization and are a potential source of headaches and significant losses. So, what are EUCs and how do we save ourselves this trouble? Here's my point of view based on our recent case.
Software goes through many stages during its development cycle. It usually starts as source code in an IDE; different parts are executed in unit tests, maybe even whole components are tested. It is then packaged (compiled) and deployed to some server (staging, production, …). That is a very rough lifecycle which is not applicable in all cases, but at least during my time at CloseIT, many projects have followed it.
When I started putting together my first AWS SAM project, I was confused by the project structure - as always when I start a new project with a new technology. You can easily end up with a bloated project where code is duplicated in each Lambda function.
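One layout that avoids the copy-paste problem keeps shared code in a Lambda Layer that every function references from the SAM template. The tree below is purely an illustration (directory and file names are assumptions, not from the article), shown here for a Python runtime, where layer code must sit under a `python/` directory:

```
my-sam-app/
├── template.yaml          # SAM template; each function references the shared layer
├── layers/
│   └── common/            # shared code, packaged once as a Lambda Layer
│       └── python/
│           └── utils.py   # helpers imported by every function
└── functions/
    ├── create_order/
    │   └── app.py         # imports utils from the layer, no duplication
    └── get_order/
        └── app.py
```

In `template.yaml`, the layer is declared once as an `AWS::Serverless::LayerVersion` resource and listed in each function's `Layers` property.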