ENGIN™, an asset analytics accelerator, provides utility asset management planners with a powerful tool to gather the insights they need across their entire power system. ENGIN™ accelerates the analytical workflows required for planning and models the system of interconnected assets, from generating source to customer load, bringing forward the intelligence utilities need to make better business decisions.
We can identify two general classifications of data in our application:
- Tracking data: created by the system in response to user interaction, e.g. notes assigned to different entities within the system
- Analysis data: business analytics generated through the core workflows in ENGIN
We do not use any Object-Relational Mapping (ORM) within our project. While an ORM may be standard for the "tracking data", which is subject to typical CRUD operations (create, read, update, delete), it would be infeasible and overly complex to implement and maintain for our "analysis data" due to large data volumes, a high degree of flexibility among different users, a performance profile skewed towards read operations, and the fact that most reads involve data operations (aggregations, etc.).
- Tracking data: we have instead chosen to implement lightweight abstractions to perform necessary CRUD operations
- Analysis data: custom fit-for-purpose solutions for different components of our application, based on the unique requirements for different components
- Main technology: Java
- Main database: SQL database via PostgreSQL
- Key frameworks: Spring / SpringBoot
Our lightweight abstraction for CRUD operations allows us to execute SQL queries, read data from our database, and map it into a Java data model. Sometimes we then convert the data model into a different Data Transfer Object (DTO) for various reasons: e.g. to perform further analysis, or to strip unnecessary information before sending it over an Application Programming Interface (API). This latter workflow is more common when accessing analysis data. After a recent feature build-out we found we were frequently reading analysis data from the database into a data model (using our custom lightweight abstractions) and immediately converting it into a separate DTO, often reading from the same normalized database table(s) to create different DTOs. This created redundancy and sub-optimal performance when dealing with larger datasets.
We decided to create a new implementation for these specific operations: execute a SQL query and return the results in any custom class. These are strictly read operations. We narrowed the scope down to four functional requirements:
- Read a single custom DTO
- Read a list of custom DTOs (single class)
- Read a single data point as a single data type
- Read a list of data points as a specific data type (single class)
Requirements 3 and 4 are simplifications of requirements 1 and 2. Beyond the functional requirements above, we also needed to be able to autowire this reader as a bean in Spring Boot for easy use across the application.
With these requirements met, we can quickly and easily create a Java DTO and read data from the database into it. There are no restrictions on the underlying persistence structure beyond the complexity of the SQL query itself: read an entire single table, part of a single table, join and read from multiple tables, or perform aggregations / database operations and read the results.
Our requirements can be summarized in the following interface, `SQLToDTOReader`.
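The complete interface is available in the Gist referenced later in this article; a minimal sketch covering the four requirements might look like the following. The method names and signatures here are illustrative assumptions, not the exact ones from the Gist, and query parameter binding is omitted for brevity.

```java
import java.util.List;

// Illustrative sketch only: method names and signatures are assumptions,
// not the exact ones from the Gist. Parameter binding is omitted for brevity.
interface SQLToDTOReader {

    // Requirement 1: read a single row into one custom DTO
    <T> T readDTO(String sql, Class<T> dtoClass);

    // Requirement 2: read many rows into a list of DTOs (single class)
    <T> List<T> readDTOList(String sql, Class<T> dtoClass);

    // Requirement 3: read a single data point as a single data type
    <T> T readValue(String sql, Class<T> type);

    // Requirement 4: read a list of data points as a specific data type
    <T> List<T> readValueList(String sql, Class<T> type);
}
```

Keeping the return type generic over `Class<T>` is what lets any custom DTO or simple data type be used without per-DTO reader code.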
Using this interface, we implemented the class `SQLToDTOReaderImpl`. The class uses three annotations:
- `@Slf4j` for logging
- `@Component` to register the class as a Spring component
- `@Transactional(readOnly = true)` for read-only data access
The implementation needs a connection source, for which we used `DataSourceUtils.getConnection()`. This method requires a `DataSource` bean and obtains connections from Spring's managed connection pool.
An example of the complete interface implementation is provided in this Gist: [SQLToDTOReader](https://gist.github.com/mominsamir/f965603a2d431cd9a26b31e6017ead8c). We will focus on one method in this article: fetching a list of DTOs.
Mapping SQL result set to a Java DTO
The bulk of the lifting is performed by a helper method called `autoMapModel`, which maps the `ResultSet` into instance(s) of the DTO class. This method reads the relevant class properties of the DTO, converts from database types into Java data types, and finally creates an instance of the DTO class.
We want this method to support different database types and the different Java types used within our DTOs, without enforcing any unnecessary column-order constraints in the SQL query itself. Pseudo-code:
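As an illustrative outline of `autoMapModel` (the full method lives in the Gist; helper names such as `toCamelCase` are placeholders):

```
autoMapModel(resultSet, dtoClass):
    metaData      = resultSet.getMetaData()              // column count and labels
    propertiesMap = fields of dtoClass, keyed by name    // via Util.convertToPropertyMapper
    results       = empty list

    for each row in resultSet:
        dataMap = empty map
        for column in 1 .. metaData.getColumnCount():
            label    = metaData.getColumnLabel(column)   // e.g. install_year
            property = toCamelCase(label)                // e.g. installYear
            if property not in propertiesMap: throw exception
            dataMap[property] = convert row[column] from database type to Java type
        results.add(build instance of dtoClass from dataMap)

    return results
```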
First we understand the structure of the result set and DTO class:
- `metaData` is used to get the total column count as well as column labels from the (PostgreSQL) result set
- `propertiesMap` holds the field names of the DTO class; see `Util.convertToPropertyMapper` for more details
- `dataMap` will be populated with Java objects for each property of the DTO
We read the column label from the SQL statement and map it to a property of the Java DTO, throwing an exception if the property is not found in the class. Here you may want to support your own logic for mapping column nomenclature to Java class property nomenclature. This example uses lower_snake_case for SQL column names and lowerCamelCase for Java class properties.
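For that convention, the label-to-property conversion can be as simple as the sketch below. `ColumnNames.toPropertyName` is a hypothetical stand-in for the naming logic inside `Util.convertToPropertyMapper`; the Gist version may differ.

```java
// Hypothetical helper: converts a lower_snake_case column label to a
// lowerCamelCase Java property name. Illustrative only; the Gist's
// Util.convertToPropertyMapper may implement this differently.
final class ColumnNames {
    static String toPropertyName(String columnLabel) {
        StringBuilder property = new StringBuilder(columnLabel.length());
        boolean upperNext = false;
        for (char c : columnLabel.toCharArray()) {
            if (c == '_') {
                upperNext = true;          // drop the underscore, capitalize next char
            } else {
                property.append(upperNext ? Character.toUpperCase(c)
                                          : Character.toLowerCase(c));
                upperNext = false;
            }
        }
        return property.toString();
    }
}
```

For example, a column labeled `install_year` maps to a DTO property named `installYear`.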
We handle different PostgreSQL data types and map them to the appropriate Java objects. Many data types are handled here, but different applications may have different requirements (and database extensions may need to be accommodated if custom types are added). Results are inserted into the `dataMap` keyed by property name.
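As a small illustration of this step, the conversion can dispatch on the object type returned by JDBC. This sketch covers only a few representative types and maps `numeric` to `double` as an assumed design choice; the Gist handles many more PostgreSQL / JDBC types.

```java
import java.math.BigDecimal;
import java.sql.Date;
import java.sql.Timestamp;

// Illustrative subset of the type handling only; the full implementation
// in the Gist supports many more PostgreSQL / JDBC types.
final class TypeConversion {
    static Object toJavaObject(Object dbValue) {
        if (dbValue == null) {
            return null;
        }
        if (dbValue instanceof Timestamp) {
            return ((Timestamp) dbValue).toLocalDateTime();   // timestamp -> LocalDateTime
        }
        if (dbValue instanceof Date) {
            return ((Date) dbValue).toLocalDate();            // date -> LocalDate
        }
        if (dbValue instanceof BigDecimal) {
            return ((BigDecimal) dbValue).doubleValue();      // numeric -> double
        }
        return dbValue; // strings, integers, booleans pass through unchanged
    }
}
```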
Finally, we construct an instance of the DTO. The `BeanUtilsBean` class from Apache Commons BeanUtils helps convert a map of key/value pairs into the DTO, with the support of a few generic functions (which are outside the scope of this blog). Note that we also support enumeration conversion, following the convention that all enumerations in Java are in UPPERCASE format.
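To make the construction step concrete, here is a simplified, reflection-based stand-in for what `BeanUtilsBean` does for us, including the UPPERCASE enum convention. The `AssetDTO` class and `AssetStatus` enum are hypothetical examples; the production code uses Apache Commons `BeanUtilsBean`, not this sketch.

```java
import java.lang.reflect.Field;
import java.util.Map;

// Hypothetical example DTO and enum, for illustration only.
enum AssetStatus { ACTIVE, RETIRED }

class AssetDTO {
    String assetName;
    AssetStatus status;
}

// Simplified stand-in for the BeanUtilsBean-based construction step.
final class DtoFactory {
    static <T> T fromMap(Map<String, Object> dataMap, Class<T> dtoClass) {
        try {
            T dto = dtoClass.getDeclaredConstructor().newInstance();
            for (Map.Entry<String, Object> entry : dataMap.entrySet()) {
                Field field = dtoClass.getDeclaredField(entry.getKey());
                field.setAccessible(true);
                Object value = entry.getValue();
                // Convention: database strings map to UPPERCASE enum constants
                if (field.getType().isEnum() && value instanceof String) {
                    String target = ((String) value).toUpperCase();
                    for (Object constant : field.getType().getEnumConstants()) {
                        if (((Enum<?>) constant).name().equals(target)) {
                            value = constant;
                            break;
                        }
                    }
                }
                field.set(dto, value);
            }
            return dto;
        } catch (ReflectiveOperationException ex) {
            throw new IllegalStateException("Could not build " + dtoClass, ex);
        }
    }
}
```

With this, a map such as `{"assetName": "T-101", "status": "active"}` yields a populated `AssetDTO` whose `status` is the enum constant `ACTIVE`.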
Putting it all together
A fully constructed version of the code can be seen in this Gist: SQL To DTO Reader.
Once the implementation is complete, we can inject the `SQLToDTOReader` interface wherever it is required and use the appropriate method to read data and map it to DTOs.
This implementation has helped our team write less boilerplate code and add new implementations more quickly and easily. This blog presents an example use-case, which may need to be expanded for different reasons:
- Support for more data types to cover all types used within your application (including custom types)
Article written by Samir Momin. Samir is a Senior Software Developer and Technical Lead at Engineered Intelligence with over 14 years of experience in software development, with expertise in the development, maintenance, and end-to-end testing of client-facing software applications. Samir brings his full-stack expertise and experience with a vast number of technologies and frameworks to design and implement the best solutions for ENGIN™.