How is Oracle still a thing?
Discussion
Banking-sector enterprise customers that never upgrade.
Name some open source alternatives
They make the clients suffer from:
As a former AE: enterprise banking customers, and Oracle's willingness to drop on-premise prices to the floor if you commit to buying cloud 😭
No shit. I have to use it to submit bids and I feel like I'm in the Matrix. Gotta keep those KPI report cards flawless lol.
Huge long term contracts, especially on "maintenance", with large enterprises and governments that are too large and afraid to change.
CIA front. No wonder enterprises find themselves "stuck" with Oracle for 30 years.
Or maybe those companies just accidentally forgot to ever evolve their stacks. I guess that's possible too 🙄
You're missing the point that they're still around, but some of their constantly-evolving competitors no longer are.
That was exactly my point. They're a strategically important natsec asset, designed as such, and supported accordingly. Market competition...takes a backseat for things like this.
companies founded and supported to do work for the military have infinite money to do whatever they like. apple, google, microsoft, boeing, airbus, mcdonnell douglas, etc etc. they have their market dominance because they get all the money they need to pay all the lawyers to lobby the government and basically outlaw competition.
if you look closely at the history of tech companies that got big and then disappeared, there are always funny little strange things happening. Commodore, for example, was infiltrated and basically taken apart. their tech was too far advanced; any amiga fan would relate to this. the computer had graphics acceleration coprocessors that would not see wide use in the rest of the industry until about the year 2000: flickerless sprites, bulk memory copy and splicing coprocessors... until about 2000 the windows mouse cursor flickered. apple's didn't, because they used a simple technique involving bitmasking, which i figured out how to do on a TRS-80 CoCo back in about 1987 just by reading descriptions in Byte and other programming-oriented magazines of the time. even now we still don't have vblank sync that works effectively for 3d graphics, but most 2d graphics apps now do have vblank sync (stopping tearing and flicker).
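(for the curious: the bitmask trick described above can be sketched in a few lines of Go. the `composite` function and the 1-bit-per-pixel framing here are illustrative only, not how the CoCo or the Mac actually laid out video memory. the point is that the cursor is drawn in one pass, so there is no erase-then-redraw flicker:)

```go
package main

import "fmt"

// composite draws an 8-pixel strip of a sprite over a background row using
// the classic mask trick: clear the pixels under the sprite shape with
// AND NOT, then merge the sprite pixels with OR. one byte = 8 one-bit pixels.
func composite(bg, sprite, mask byte) byte {
	return (bg &^ mask) | (sprite & mask)
}

func main() {
	bg := byte(0b10101010)     // background pattern
	sprite := byte(0b01100000) // cursor pixels
	mask := byte(0b11110000)   // cursor shape: the high nibble
	fmt.Printf("%08b\n", composite(bg, sprite, mask))
	// high nibble comes from the sprite, low nibble from the background
}
```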
these military-backed companies have basically slowed the progress of tech, and combined with the endless ways they prevent people from entering the industry, mainly copyright law and EULAs, we have seen tech slow down massively since the inception of the microcomputer in the 80s.
even now, here on nostr, i can almost guarantee you that primal and maybe other major nostr projects have links to people in VC who are actually working for the military. primal especially; look at their vibe and style, they smell like spooks.
Government contracts
Ever tried migrating one? They're often absolutely gigantic, full of procedures and complex views and classes (tables), and function as de facto interfaces for myriad other applications.
And you usually have to migrate in production, or run it in parallel with the new one, for months.
And for little gain, other than trading your maintenance work for better odds of joining the next seven-hour AWS outage.
Also, look at the type of applications still running Oracle. It's like with COBOL: all the stuff that CANNOT GO DOWN, DON'T MOVE IT EVER, EVERYONE'S LITERALLY GONNA DIE.
Fun fact: COBOL was my second programming language and Oracle was my first DB. We regularly get requests for maintaining both of those things, and it pays well. 😊
this is the plague of centralized databases and something that the nostr architecture can fix. most of the solution relates to distributed replication and multi-server fetching strategies, though the tech being used for this is still pretty primitive on most relays.
They often capture and distribute events during the input/output step, so that multiple instances of the same data set exist at different company offices. Their events have unique keys now.
It's all moving toward the Nostr concept, it's true.
yup... all that's needed is some more experimentation with distributed dynamic cache strategies and more use of the pub/sub model. the sub side is there already and can be made to run relatively simply, but the push side needs to be there too, or at least a model of subscription in which emitted events go into a queue, and when a subscriber drops, they get sent the backlog they missed in the interim. this isn't really that complicated to implement either; in fact, i wrote a sub thing using SSE, and all i have to do to make it resilient is create a primary subscription queue, progress monitoring of subscribers, and a slightly different contract than the best-effort that is the current standard.
i will build this in my rewrite of realy too... it will be a relay-side thing: a separate "sync" endpoint with a thread maintaining the IDs of recently stored events in a cache, plus per-subscriber queue state management that always sends events. the receiver acks them, instead of the current fire-and-forget scheme.
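(a minimal sketch of that per-subscriber queue in Go. `SubQueue`, `Event`, and the int64 sequence numbers are my own illustrative names, not realy's actual types; the idea is just that nothing is dropped until the subscriber acks it, so a reconnect can resume from the last ack:)

```go
package main

import "fmt"

// Event is a stored relay event; ID stands in for the 32-byte nostr event id.
type Event struct {
	Seq int64
	ID  string
}

// SubQueue tracks per-subscriber delivery state: events are appended in
// storage order and only pruned once the subscriber has acked them, so a
// dropped connection resumes from the last ack instead of losing events.
type SubQueue struct {
	pending []Event
	acked   int64 // highest acked sequence number
}

func (q *SubQueue) Push(e Event) { q.pending = append(q.pending, e) }

// Ack records delivery up to seq and prunes everything at or below it.
func (q *SubQueue) Ack(seq int64) {
	if seq > q.acked {
		q.acked = seq
	}
	i := 0
	for i < len(q.pending) && q.pending[i].Seq <= q.acked {
		i++
	}
	q.pending = q.pending[i:]
}

// Backlog is what gets re-sent when the subscriber reconnects.
func (q *SubQueue) Backlog() []Event { return q.pending }

func main() {
	q := &SubQueue{}
	for i := int64(1); i <= 3; i++ {
		q.Push(Event{Seq: i, ID: fmt.Sprintf("ev%d", i)})
	}
	q.Ack(2) // subscriber confirmed up to seq 2, then dropped
	fmt.Println(len(q.Backlog())) // only ev3 is resent on reconnect
}
```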
semi started me thinking towards this direction when we came up with the idea of creating an endpoint that allows a client to know the internal sequence number of events, as this allows pull side queuing, but i think push side queuing would work even better.
with just this one feature added, you can have a whole cluster of relays all keeping up to date with each other, with multiple levels of propagation. it can also be bidirectional, so for example two relays stay in sync with each other in both directions; that requires extra state management so they don't waste time sending subscribers the events they received from those same subscribers in the other direction.
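(the echo-suppression part of that extra state management can be as simple as remembering which peer each event first arrived from. `syncState` and the peer-name strings are illustrative, assuming event IDs are unique:)

```go
package main

import "fmt"

// syncState tracks which peer each event id was first received from, so a
// relay syncing bidirectionally never echoes an event back to its source.
type syncState struct {
	origin map[string]string // event id -> peer name ("" = local client)
}

func newSyncState() *syncState {
	return &syncState{origin: map[string]string{}}
}

// Receive records an inbound event once; duplicates report false.
func (s *syncState) Receive(eventID, fromPeer string) bool {
	if _, seen := s.origin[eventID]; seen {
		return false
	}
	s.origin[eventID] = fromPeer
	return true
}

// ShouldForward reports whether an event may be sent to the given peer.
func (s *syncState) ShouldForward(eventID, toPeer string) bool {
	return s.origin[eventID] != toPeer
}

func main() {
	s := newSyncState()
	s.Receive("ev1", "relayB")
	fmt.Println(s.ShouldForward("ev1", "relayB")) // false: would echo back
	fmt.Println(s.ShouldForward("ev1", "relayC")) // true: propagate onward
}
```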
the other thing required is that relays need configurable garbage collection strategies, so that you can have master/archival relays with huge storage, and smaller ones that prune off stuff that has stopped being hot to contain their utilization: archive relays and cache relays.
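(one possible cache-relay GC strategy, sketched: evict least-recently-read events until the store fits a byte budget. the `stored` struct and `pruneLRU` are my own illustration; an archive relay would just run with an effectively unlimited budget:)

```go
package main

import (
	"fmt"
	"sort"
)

type stored struct {
	id       string
	size     int64
	lastRead int64 // unix seconds of the last query hit
}

// pruneLRU keeps the most recently read events that fit within budget bytes;
// everything else is evicted. a real cache relay would also write a small
// reference stub for each evicted event so it can be revived later.
func pruneLRU(events []stored, budget int64) (kept []stored) {
	sort.Slice(events, func(i, j int) bool {
		return events[i].lastRead > events[j].lastRead // hottest first
	})
	var used int64
	for _, e := range events {
		if used+e.size > budget {
			continue // evicted
		}
		used += e.size
		kept = append(kept, e)
	}
	return kept
}

func main() {
	evs := []stored{
		{"old", 500, 100}, {"warm", 500, 200}, {"hot", 500, 300},
	}
	kept := pruneLRU(evs, 1000)
	fmt.Println(len(kept)) // 2: "old" is evicted to stay under budget
}
```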
and then, yes, you further need a model of query forwarding, so a cache relay will propagate queries to archives to revive old records. the caches could allocate a section of their data that is just references to other records, stored with the origin of the original, now-expired event, also maintained within a buffer size limit, so they know exactly which archive to fetch it from.
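(a sketch of those reference stubs and the forwarded fetch, with `cacheRelay`, `stub`, and the archive-URL field all hypothetical names; the fetch function stands in for a network round-trip to the archive relay:)

```go
package main

import "fmt"

// stub records where an expired event went, so a cache relay can forward
// the query to exactly the right archive instead of asking everywhere.
type stub struct{ archiveURL string }

type cacheRelay struct {
	hot   map[string]string // event id -> event json still held locally
	stubs map[string]stub   // event id -> archive holding the full event
}

// fetchFromArchive stands in for a network call to an archive relay.
type fetchFromArchive func(url, id string) (string, bool)

// Get serves hot events directly and revives expired ones via their stub.
func (c *cacheRelay) Get(id string, fetch fetchFromArchive) (string, bool) {
	if ev, ok := c.hot[id]; ok {
		return ev, true
	}
	if s, ok := c.stubs[id]; ok {
		if ev, ok := fetch(s.archiveURL, id); ok {
			c.hot[id] = ev // revived: it is a hot record again
			return ev, true
		}
	}
	return "", false
}

func main() {
	archive := map[string]string{"ev9": `{"id":"ev9"}`}
	c := &cacheRelay{
		hot:   map[string]string{},
		stubs: map[string]stub{"ev9": {archiveURL: "wss://archive.example"}},
	}
	ev, ok := c.Get("ev9", func(url, id string) (string, bool) {
		v, ok := archive[id]
		return v, ok
	})
	fmt.Println(ok, ev)
}
```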
lots of stuff to do... i started doing some of this with the original "replicatr" my first attempt at a nostr relay, implemented a whole GC for it, wrote unit tests for it... the whole idea was always about creating multi-level distributed storage. unfortunately no funding to focus on working on these things, instead i'm stuck building some social media dating app system lol
this is one thing that sockets can do better, because they don't necessarily send events all at once. i previously wrote the filters such that they sort and return results all in one whack. i think what you probably want is, for each filter, to identify the query by a number in the response, while the client maintains an SSE channel that allows the relay to push results.
with this, the query can then propagate: all the results that are hot in the cache are sent immediately, and if there were events that required a query forward, those results can be sent to the client later over the SSE subscription connection.
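(the query-numbering idea in miniature, with a channel standing in for the SSE stream; `result` and `pusher` are illustrative names. immediate cache hits and late forwarded results share one push channel, and the client demultiplexes by query id:)

```go
package main

import "fmt"

// result pairs a payload with the numeric id of the query that produced it,
// so the client can route late, forwarded results to the right handler.
type result struct {
	QueryID int
	Payload string
}

type pusher struct {
	nextID int
	out    chan result // stands in for the SSE stream to one client
}

// Query assigns an id, delivers hot cache results now, and returns the id
// the client should watch for when forwarded archive results arrive later.
func (p *pusher) Query(hot []string) int {
	p.nextID++
	id := p.nextID
	for _, h := range hot {
		p.out <- result{QueryID: id, Payload: h}
	}
	return id
}

func main() {
	p := &pusher{out: make(chan result, 8)}
	id := p.Query([]string{"cached-ev"})
	// a forwarded archive result arrives later on the same channel:
	p.out <- result{QueryID: id, Payload: "archived-ev"}
	close(p.out)
	for r := range p.out {
		fmt.Println(r.QueryID, r.Payload)
	}
}
```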
i really really need to have some kind of elementary event query console to do these things, a rudimentary front end. i probably should just make it a TUI, i think there is at least one existing Go TUI kind 1 client... i should just build with that, instead of fighting the bizarre lack of adequate GUIs for Go
Yes, as a matter of fact, my first job was migrating a Delphi + Oracle ERP system, where all the business logic was coded as stored procedures, to PostgreSQL. Things like 3,000-line SQL queries and 10,000-line PL/SQL triggers and functions were common. It took us a year, but it worked and saved the company millions of dollars in licenses alone.
So, the license had run out? They often run a decade...
Yep, they had the system for two decades and decided it was not worth paying anymore. The three of us did the full conversion in a year. And I was a junior, so... super cheap.
The long contracts are one major reason why nobody bothers. I've read that they sometimes offer a discount if you agree not to use anything else for the life of the contract, too. Otherwise, you pay the difference and a penalty.
We're about to enter the second year of planning our next big Oracle migration. LOL Every time it gets close, they get cold feet and delay again.
The criticality of the system is off the charts, so any error would end up front page news. I get it. But, at some point, you have to just bite the bullet.
I don't know how big your system is, but my experience is that this is a lot easier than it seems, especially if you have good testing. I was able to build tests for, and translate, 40 PL/SQL stored procedures every day. After you learn the gotchas, the rest is easy. And I can only imagine the languages are a lot more similar now than they were back in the day.
Legacy and inertia
The guy overtook Musk in Forbes lol
autonomous (now AI-enabled) database is a big thing. like, really big.