Holistic.dev is a static program analyzer and knowledge mining tool. We mine insights about the data relation structure across the whole project, based on the database schema and DML queries. This knowledge allows us to automatically check relation consistency and provides tools for finding issues automatically.
Our product is not an operational metrics analyzer:
- we do not analyze database configuration (planner settings, buffer sizes, and the like)
- we do not analyze execution plans
- we do not manage replication or connection pools. There are plenty of tools for these purposes, and most of them are really good. After all, we don't even need a direct connection to any part of your infrastructure.
The combination of database structure and DML queries contains a lot of knowledge that stays hidden from a human reader, and we mine it. What kind of knowledge? For each DML statement's result we derive:
- field types
- field nullability
- a row-count category, one of five values: none, one, one or none, many, many or none
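As a hypothetical illustration (the `users` table and its columns are invented for this sketch), here is how those properties can be read off a statement without executing it:

```sql
-- Assumed schema: CREATE TABLE users (id int PRIMARY KEY,
--                                     email text NOT NULL, name text);

-- Primary-key lookup: row-count category "one or none";
-- "email" in the result is non-nullable, "name" is nullable.
SELECT id, email, name FROM users WHERE id = 42;

-- No unique filter: row-count category "many or none".
SELECT name FROM users WHERE name LIKE 'A%';

-- Aggregate without GROUP BY: exactly "one" row, always.
SELECT count(*) FROM users;
```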
Based on that basic knowledge and the dependency graph, we can mine more complex knowledge, such as:
- expressions that can use indexes. Of course, we can't say exactly which indexes will be used in each execution, because that depends on many runtime details such as cache state, table statistics, CPU load, and more. But we can specify the necessary condition, not the sufficient one: whenever a situation makes it impossible to use an index, we will point it out.
- used relations and fields
- inefficient JOIN clauses
- sublinks that should be rewritten as JOINs
- a list of possible runtime exceptions
- redundant expressions, aggregate function calls, or ORDER BY clauses
- always-true or always-false clauses
- unsafe architecture patterns
- and much more
And the inverse: we can find expressions that can never use an index; tables that are mentioned but never used; unused data-modifying common table expressions; and more.
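A minimal sketch of both kinds of finding, using an invented table and index:

```sql
-- Assumed schema: CREATE TABLE users (id int PRIMARY KEY, email text NOT NULL);
-- Assumed index:  CREATE INDEX users_email_idx ON users (email);

-- Can never use users_email_idx: wrapping the column in a function
-- defeats a plain b-tree index (an expression index on lower(email)
-- would be required instead).
SELECT * FROM users WHERE lower(email) = 'a@example.com';

-- Always-false clause: "email" is declared NOT NULL,
-- so this predicate can never match a row.
SELECT * FROM users WHERE email IS NULL;
```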
The machine can absorb the full complexity of the database structure and queries and holistically analyze the result of SQL statement execution without actually executing anything - and it can process many databases at the same time.
Based on all that knowledge, our solutions can tell developers and DBAs what to change to make queries faster, remove excess query complexity, and preserve the consistency of the whole project when changes are made in any part of it.
We currently support PostgreSQL, including the most recent syntax changes (v13). You can analyze any database that uses PostgreSQL-compatible syntax.
We support all the extensions from this list, meaning we know about every function and type these extensions provide. TimescaleDB is also supported.
Zero. It's free. Right now, our aim is to get your feedback and make the service more usable, more effective, and more powerful.
Never! We don't even need a direct connection to your database. We analyze only the SQL source code of the queries you provide yourself.
Unfortunately, no. Our holistic approach is based on knowledge about the database structure. We link knowledge about the database schema and the query to identify all the deep connections and suggest the most effective ways to optimize your SQL code. Sometimes the better option is to change the schema instead of the query.
Sure! You can automatically and continuously upload all queries from pg_stat_statements to the holistic.dev API, or integrate your CI tools to check queries before they ship to production. It works with both on-premise and managed databases.
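For example, a periodic job might pull the normalized query texts straight from the standard pg_stat_statements view (column names below are from PostgreSQL 13; the exact API endpoint and upload mechanics depend on your account setup):

```sql
-- Top statements by cumulative execution time; their "query" texts
-- are what gets uploaded to the holistic.dev API.
SELECT queryid, query
FROM pg_stat_statements
ORDER BY total_exec_time DESC   -- named total_time before PostgreSQL 13
LIMIT 100;
```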
The static SQL query analyzer can significantly reduce the time spent searching for performance problems and problematic architectural patterns. Get detailed recommendations for optimizing queries from your production database: upload the database schema (DDL), start the automatic upload of slow query logs, and receive notifications about necessary optimizations right in Slack! Over 100 recommendations for improving performance and over 600 recommendations for changing the architecture.
The exponential growth of the cognitive complexity of projects makes it harder for engineers to keep track of all the details. Forced switching between projects becomes more time-consuming every day, and routine leaves no room for creative tasks - all of which leads to growing job dissatisfaction and slower task performance.
One of the biggest challenges in backend application development is working with databases. Engineers must keep in mind all the intricacies of working with the database and its data structure, keep types up to date on the application side, and resolve many other issues.
We create solutions for automating routine tasks that engineers face when creating applications interacting with a database. Up to 50% of engineers' working hours can be spent on tasks that can be automated.
- SQL as part of application – seamless integration of query types and application types
- Code generation - automated generation of application code based on your queries, for your language and framework (js/typescript/flow/golang/php/python/java/c/c++/c#...)
- Boost productivity by cutting down on routine tasks
- No need to keep in mind all the intricacies of the database structure – that's what we call a triumph over cognitive complexity.
- Early error detection
- Save testing time during development
- Help with debugging
- Microservice-friendly – automatically generated data contracts in JSON, BSON, MessagePack, Protocol Buffers, Thrift, Avro, Cap'n Proto, and FlatBuffers formats
- Well-suited for forks (such as Citus, Greenplum, Timescale)
Benefits for engineers of all grade levels:
- Junior - Training (working with a mentor)
- Middle - Evolution (adoption of best practices)
- Senior - Support (working with a team member and assistant all in one)
Tools by role
- BACKEND DEVELOPERS: automated code generation; watch for performance and architecture issues; prevent runtime errors
- QA: automated test generation; increased test coverage
- DEVOPS: automated migrations when merging branches
A team leader's work becomes significantly more straightforward when the team's developers work on product tasks instead of fighting a losing battle. Our solutions help speed up the code review process, provide code metrics that were previously unavailable, improve employee satisfaction, and increase productivity. Up to 75% of engineers' working hours can be spent on tasks that can be automated.
- The multiplier effect of using solutions for teamwork
- Prevent errors before compiling and testing
- Automated data mining provides more profound and more accurate insights than human analysis
- Code quality metrics – you can't improve what you can't measure
- Reduction of developer onboarding time and context switching time, because you don't have to keep in mind all the details of the database structure
- Well suited for cloud projects
- Positive feedback about the team's internal processes reduces the cost of hiring engineers
Build a better engineering culture
- Increase Team Velocity
- Reduce Cycle Time
- Boost expertise of developers of all grades
- DocOps practices
- Increase team productivity
Improve code quality and QA
- Detect and rectify errors in the early stages of development – production bugs are too expensive to fix
- Unify code style and architectural patterns
- Increase code maintainability
- Reduce test running time
- Increase tests coverage
- Minimize code review time
The more new functionality that can be delivered during a sprint, and the more hypotheses that can be tried in parallel, the more efficient the product manager's work. That efficiency depends on how effectively the engineering team works. Our solutions speed up the development process, reduce the sprint time allocated to bug fixing, and allow you to implement more features per sprint.
Minimize time to market
- Develop new features faster
- Ship product faster
The average budget for an engineer, including taxes, office, and equipment expenses, is about $100,000 per year. Depending on the region, this amount may be twice as large. Revenue per employee - the revenue an employee generates - is usually two to three times labor costs.
Increasing an engineer's efficiency by at least 10% reduces costs by $10,000 per year or adds an extra $20,000-$30,000 per engineer to revenue. These numbers grow when evaluating the work of teams as a whole. Coding and fixing bugs directly or indirectly related to the database can take up to 50% of engineers' working time.
Moreover, expenses are reduced thanks to early detection and fixing of errors and performance issues. Studies suggest that the cost of eliminating an error in the development phase is up to 500 times lower than in production.
The cost of static analysis solutions usually does not exceed the salary of one junior engineer per ten employees of the company. Based on the target performance improvement, the ROI of a static analyzer can be up to 30x, and it can speed up time to market by an average of a month over the course of a year.
Save on infrastructure costs
- Reduce testing infrastructure costs
- Boost database performance - you can serve more customers using existing equipment
- Reduce the risk of data leakage
- Reduce the risk of data loss
- We do not need an actual database connection - we parse only the source code of DDL/DML queries
- We do not need your application source code
- We never publish your source code in any publicly available sources
- We never publish reports on your project issues in any publicly available sources
Yes, it is possible. Well done!
At present, 5% of the rules described above are implemented, and it's quite likely we will be able to detect more later. Stay tuned: we will automatically send you a report if new rules find problems in your queries.
Besides, at the moment we do not diagnose errors that may occur at runtime. If a query uses a non-existent table, the system will not be able to deduce the types, but it will not show a warning about the missing table. Runtime error reports will be added soon.