As organizations gear up to implement Data Governance, Master Data Management and Metadata Management, databases should apply AI to themselves. Memory is cheap, so imagine every SQL query being tracked in a central repository, with the unique row identifiers behind each query (DML and DQL) stored in memory, irrespective of the number of joins, WHERE conditions, analytic functions and so on.
Kind of confusing, ain't it? Let me explain. Think of this SQL query:
SELECT department_id, last_name, salary,
RANK() OVER (PARTITION BY department_id ORDER BY salary) RANK
FROM employees WHERE department_id = 60
ORDER BY RANK, last_name;
Obviously, a traditional database would suggest a bitmap index on the department_id column, given its low cardinality, for faster access. Indexing is nothing but storing the exact location of a row (data object, data block, data file and so on). Online indexes can be built or rebuilt while DML keeps running against the table, so the DBA never has to take it offline for the rebuild.
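For concreteness, here is what that tuning advice might look like in Oracle-style SQL; the index name emp_dept_bix is just an illustrative choice, not something from the original example.
-- Low-cardinality column: a bitmap index is the classic suggestion
CREATE BITMAP INDEX emp_dept_bix ON employees (department_id);
-- Rebuild it online: DML against EMPLOYEES keeps running while the DDL executes
ALTER INDEX emp_dept_bix REBUILD ONLINE;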
Now think of this. If the rowids that produce the above SQL's result set (the EMPLOYEES rows for department 60, along with the computed RANK values) are pre-populated in memory, then for future executions of the same query all the SQL engine has to do is work with those rowids.
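A rough sketch of what that rowid-driven access could look like, written out as plain SQL rather than engine internals; the rowid literals are made up purely for illustration.
-- Hypothetical: the engine answers the cached query by fetching rows directly by rowid
SELECT department_id, last_name, salary,
       RANK() OVER (PARTITION BY department_id ORDER BY salary) RANK
FROM employees
WHERE ROWID IN ('AAAQ+LAAEAAAACXAAA', 'AAAQ+LAAEAAAACXAAB')  -- made-up rowids from the cached result set
ORDER BY RANK, last_name;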
In the case of the above query, when an employee's department number is updated, all the SQL engine has to do is update the cached result set's rowids (a DML on the stored rowids for every SQL query reference held in memory). Future executions of the same SQL query then become pure rowid processing from memory. This paves the way for real-time ETL as well. Most importantly, OLTP DML resulting in rowid updates in memory means OLAP querying becomes real-time the instant updates flow in from the OLTP system.
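One way to picture that bookkeeping is a registry table keyed by query, maintained whenever the underlying rows change. Everything here is a hypothetical sketch: the table SQL_RESULT_CACHE, its columns, and the SQL_ID value are assumptions for illustration, not an existing database feature.
-- Hypothetical registry: one row per (cached query, contributing rowid)
CREATE TABLE sql_result_cache (
  sql_id     VARCHAR2(13),  -- identifier of the cached SQL text
  source_rid ROWID          -- rowid that feeds the cached result set
);
-- When an employee moves out of department 60, drop that rowid from the cached query
DELETE FROM sql_result_cache
WHERE sql_id = 'g3yx2u5rnqw1z'  -- made-up SQL_ID for the department 60 query
  AND source_rid NOT IN (SELECT ROWID FROM employees WHERE department_id = 60);
-- When an employee moves into department 60, register the new rowid
INSERT INTO sql_result_cache (sql_id, source_rid)
SELECT 'g3yx2u5rnqw1z', ROWID
FROM employees
WHERE department_id = 60
  AND ROWID NOT IN (SELECT source_rid FROM sql_result_cache WHERE sql_id = 'g3yx2u5rnqw1z');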
Think of mergers and acquisitions and the amount of effort spent on integration. Also think of hyperlocal delivery, where one warehouse can work with multiple partners at will.
Until next time, OpenAI of Databases: Memory is cheap, Processing is Costly...