In Brno, a city with a thriving software development scene, we have launched a blog to share our thoughts.
I would like to discuss a problem I repeatedly encounter when performing complex, multifaceted tasks such as bank stress tests: the inability to concentrate on the "bigger picture", and the consequences this has for the analyst.
Let's say that PDI, the ETL tool, isn't well suited to dynamic data processing. But the same could probably be said of other ETL tools. At the end of the day, you need to tell the software how to transform the data, so it is only logical that we also have to define the data structure.
Jira is a popular issue tracking tool and is also used in many businesses as a business process management (BPM) system. Sure, there are more suitable tools like jBPM, but Jira is much more user-friendly. And even though it's less feature-rich in terms of BPM options, it can be easily extended using plugins written in Java.
When I started to work at CloseIT, I was eager to understand how the app works. Usually I learn by trial and error. On the other hand, each build takes time and disrupts my workflow, so I wondered whether there was a way to make it faster. It was my first work experience with the Spring framework, but it didn't take long to find people with the same problem, and that's how I found Spring's devtools.
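For a Spring Boot project, devtools is typically enabled with a single build dependency, after which the application restarts automatically when classes on the classpath change. A minimal Maven sketch (the coordinates are the standard `spring-boot-devtools` ones; your build setup may differ):

```xml
<!-- pom.xml fragment: enable Spring Boot DevTools for fast restarts -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <scope>runtime</scope>
    <optional>true</optional>
</dependency>
```

Marking the dependency as `optional` keeps devtools out of builds that depend on your project, and the `runtime` scope keeps it off the compile classpath.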
Automation often represents the process of simulating human activity using computers or machines. It is a current discussion topic in many fields, and model governance is no exception. Our team has successfully implemented several process automations, including automation within the field of model risk governance.
One use case of our CRM solution is running custom calculations with custom ETL jobs prepared by the application users. This is a big advantage of the CRM, because users have better insight into the business logic than developers, who are mostly concerned with the technological side of the application.
One tricky task recently came out of a feature we were implementing: capture Java logs in memory and save them to the database. This seemingly clear goal turned out to be less trivial than we thought. Still, after some research, it proved realizable in a short time.
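The general idea can be sketched with a custom `java.util.logging` handler that buffers records in memory until they are persisted in one batch. This is an illustrative sketch, not our actual implementation; the `InMemoryLogHandler` class and its `drain` method are hypothetical names, and the database write is left as a placeholder:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Sketch: a handler that keeps formatted log entries in memory so they
// can later be written to the database in a single batch.
public class InMemoryLogHandler extends Handler {
    private final List<String> buffer = new ArrayList<>();

    @Override
    public synchronized void publish(LogRecord record) {
        if (isLoggable(record)) {
            buffer.add(record.getLevel() + " " + record.getMessage());
        }
    }

    @Override
    public void flush() {
        // In a real application, the buffered entries would be
        // written to the database here (e.g. one batch INSERT).
    }

    @Override
    public synchronized void close() {
        flush();
        buffer.clear();
    }

    // Returns the buffered entries and clears the buffer.
    public synchronized List<String> drain() {
        List<String> copy = new ArrayList<>(buffer);
        buffer.clear();
        return copy;
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false); // keep output off the console
        InMemoryLogHandler handler = new InMemoryLogHandler();
        logger.addHandler(handler);

        logger.info("calculation started");
        System.out.println(handler.drain());
    }
}
```

In practice, a logging framework like Logback offers the same extension point via a custom appender; the buffering-then-batching idea stays the same.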
The number of models your organization must manage is increasing every year. Models are often registered in tools like Excel that are excellent for a particular type of task, but are certainly not ideal as inventory tools to track current status, search by different criteria, or manage the variety of workflows associated with the model's life cycle.
Yet another boring conference attended by a sleeping audience with the highlight being a piece of mediocre cheesecake served during a refreshment break? Model Risk Management Europe 2019, held in London, was without a doubt the very opposite!
When we started with the implementation of the IFRS 9 standard, we had about a year to design and deliver a working, and hopefully extendable and robust, solution. Since most of our developers are comfortable with Java and we had built other projects in this language, the decision to build the core of the application on the Java Spring framework was easy.
In the previous blog post, I wrote about our problem with the JVM CodeCache. This cache stores compiled machine code, and if your Java application generates new code on the fly, it will eventually fill up.
As a Java developer, you don't need to know the internal details of the JVM most of the time. The virtual machine is not a trivial piece of technology, and learning its internals can be daunting. But there are times when you find something in the logs that seems important. Like the message: CodeCache is full. The compiler has been disabled. Try increasing the code cache size using -XX:ReservedCodeCacheSize=
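Before raising `-XX:ReservedCodeCacheSize`, it helps to see how full the code cache actually is. The JVM exposes it through the standard memory pool MXBeans; a small sketch (pool names vary by JVM version — Java 8 reports a single "Code Cache" pool, while Java 9+ splits it into segmented "CodeHeap" pools):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Print current usage of the JIT code cache pool(s).
public class CodeCacheUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Match "Code Cache" (Java 8) or "CodeHeap '...'" (Java 9+).
            if (pool.getName().contains("Code")) {
                MemoryUsage usage = pool.getUsage();
                System.out.printf("%s: used %d of %d bytes%n",
                        pool.getName(), usage.getUsed(), usage.getMax());
            }
        }
    }
}
```

Watching these numbers over time shows whether the cache is genuinely growing toward its limit or whether the warning was a one-off spike.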