DB OPTimize

What a Shock! When You Suddenly Start Working With Other Databases... (Part 1)

Back in the old days, many organizations worked with only one kind of database, and many DBAs built their entire knowledge base around that one specific product. This is hardly the case anymore. As data grows massively, as data types come in all varieties, and as market needs change rapidly, organizations now hold many kinds of databases, relational and non-relational, for all sorts of needs. The days when you could remain a single-vendor DBA are long gone. It’s a natural change. Reality will pull you into it even if you make no proactive move toward it yourself.

If your DBA skills evolved over many years around Oracle (as mine did), and only then did you start working with other databases, you have probably experienced some kind of shock. I know it was a shock to me. And why was that?

From a DBA’s perspective, Oracle nowadays exposes basically everything a DBA might need. It wasn’t like that in its older versions; it started around Oracle 10g and keeps improving. If a problem occurs, you have all the data you need to investigate the situation and solve it. You can tell what happened, both from the system perspective and from the application perspective. It’s all there: CPU, I/O, network, and much more. And I’ll emphasize it again: most important, you can see it from the application perspective, not just the system-wide perspective. Everything you are looking for can be found in the V$ views, the DBA views, and the DBA_HIST (WRI$) views. There are so many of them, and the data is always there in your basic day-to-day work, without the need to manually start a profile only after a problem occurs.
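For example, answering “what is every active user session running right now?” is a single query against the V$ views. A minimal sketch (the exact columns you pick are a matter of taste):

    -- What is each active user session running right now?
    SELECT s.sid, s.username, s.event, q.sql_text
    FROM   v$session s
    JOIN   v$sql     q ON  q.sql_id = s.sql_id
                       AND q.child_number = s.sql_child_number
    WHERE  s.status = 'ACTIVE'
    AND    s.type   = 'USER';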

You can tell what the database and the application are doing every second, and you can see whether they are behaving normally or suffering. You can tell what they were doing an hour ago, a week ago, and even a year ago (depending on your retention period). And when you have such a history of such important information, you always know what is going on, and you can always spot when things start getting rough.
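For example, the Active Session History kept in AWR lets you replay what sessions were doing around any point in the past. A minimal sketch (this assumes a Diagnostics Pack license, and how far back you can go depends on your AWR retention):

    -- What was running roughly an hour ago?
    SELECT sample_time, session_id, sql_id, event
    FROM   dba_hist_active_sess_history
    WHERE  sample_time BETWEEN systimestamp - INTERVAL '65' MINUTE
                           AND systimestamp - INTERVAL '55' MINUTE
    ORDER  BY sample_time;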

You can easily tell when an issue started, easily identify who ran it, and see what is going on behind the scenes (assuming you know what you are looking for and where to find the answers to your questions). You simply have it all.

And then I started working with SQL Server. And then I started working with Postgres. And then I started working with MongoDB, and so on. It simply happened. And I just couldn’t understand, and couldn’t accept, the huge gap between Oracle and the other databases. It felt like I was sliding from having everything to having almost nothing. No history, no application perspective, no way to see how things behaved yesterday and compare that to how they behave today. Is it good? Is it bad? God knows. No way to do some data mining on the situation. Being absolutely blind to what the application was doing inside the database. So much crucial information is simply not kept. I found myself struggling, and it felt like I had to do my job with both hands tied behind my back. It was truly frustrating.

But I am stubborn. When I believe in the importance of something, I will not rest until I achieve what I need. I strongly believe that historical performance statistics are crucial for any business, and crucial for a DBA to perform at their best. And in the absence of such statistics… well, I can assure you, nothing good will happen. So I knew what I was looking for (after all, a database is still a database, and they all behave, more or less, the same), and I started digging and reading a lot, and of course, as always happens when digging, I found (partially) what I needed. But sometimes I discovered that, amazing as it sounds, no such thing exists, so I started building my own.

For instance, I wanted something similar to the data kept in Oracle’s DBA_HIST_SQLSTAT. DBA_HIST_SQLSTAT keeps historical performance statistics for statement executions: when a statement ran, how many times it ran, how much time it took, which execution plan it used, how many buffer gets, how much CPU time was used, how many physical reads, whether parallelism was used, and many more (read more about it in a previous blog post I wrote: yes-sometimes-it-is-the-database-fault).
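To make that concrete, here is a sketch of the kind of query this view enables, joining it to the snapshot times (the &sql_id is a SQL*Plus substitution variable; adjust to your tool):

    -- Execution statistics per AWR snapshot for one statement
    SELECT sn.begin_interval_time,
           st.plan_hash_value,
           st.executions_delta   AS executions,
           st.elapsed_time_delta AS elapsed_microsec,
           st.cpu_time_delta     AS cpu_microsec,
           st.buffer_gets_delta  AS buffer_gets,
           st.disk_reads_delta   AS physical_reads
    FROM   dba_hist_sqlstat  st
    JOIN   dba_hist_snapshot sn ON  sn.snap_id = st.snap_id
                                AND sn.dbid = st.dbid
                                AND sn.instance_number = st.instance_number
    WHERE  st.sql_id = '&sql_id'
    ORDER  BY sn.begin_interval_time;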

In SQL Server, for example, I found the magnificent Query Store, and I use it a lot. It has helped in many crucial cases. In Postgres, there was no such thing, so I simply had to build my own.
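Just to give a first taste before the next articles: once Query Store is enabled (SQL Server 2016 and above), its catalog views expose per-interval runtime statistics per plan. A minimal sketch (MyDb is a placeholder name):

    -- Enable Query Store on a database
    ALTER DATABASE MyDb SET QUERY_STORE = ON;

    -- Statements by total elapsed time, per plan and per time interval
    SELECT qt.query_sql_text,
           p.plan_id,
           i.start_time,
           rs.count_executions,
           rs.avg_duration,          -- microseconds
           rs.avg_logical_io_reads
    FROM   sys.query_store_query_text qt
    JOIN   sys.query_store_query q ON q.query_text_id = qt.query_text_id
    JOIN   sys.query_store_plan  p ON p.query_id = q.query_id
    JOIN   sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
    JOIN   sys.query_store_runtime_stats_interval i
           ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
    ORDER  BY rs.avg_duration * rs.count_executions DESC;

And in Postgres, one possible direction (a rough sketch of the general idea, not the full solution) is to snapshot pg_stat_statements into your own history table on a schedule; stmt_history is a name I made up here:

    -- Requires pg_stat_statements in shared_preload_libraries;
    -- total_exec_time exists in Postgres 13+ (it was total_time before that)
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    CREATE TABLE IF NOT EXISTS stmt_history AS
    SELECT now() AS snap_time, queryid, query, calls, total_exec_time,
           shared_blks_hit, shared_blks_read
    FROM   pg_stat_statements
    WITH NO DATA;

    -- Run periodically (e.g. from cron) to accumulate history
    INSERT INTO stmt_history
    SELECT now(), queryid, query, calls, total_exec_time,
           shared_blks_hit, shared_blks_read
    FROM   pg_stat_statements;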

Curious?

In my next two articles I will explain how to work with SQL Server Query Store, and what can be done in Postgres.

www.linkedin.com/pulse/what-shock-when-you-suddenly-start-working-other-databases-kedem/

To be continued…