There was a time when desktop client software connecting directly to the RDBMS server was all the rage, necessitating that the RDBMSes of the era have networking capabilities to listen for those clients. But is there any point in a modern database providing that, now that we've all but completely moved the client->server networking into the application layer?
As cyclical as tech can be, I find it difficult to think that we'll go back to that style of architecture. There are just so many benefits to having an application server sit in between the client and the database. Going back to green screen terminals hanging off a central computer seems more likely, and that model doesn't need its database networked either.
Not having a network-capable database limits you to only one server machine, which is a problem if your application is more than just a thin layer on top of a database. You usually want a separate worker process (or multiple) to do heavy computation in the background and often, it's useful to run it on a dedicated machine.
For example, you might run the app server and DB on an expensive, highly available server, but keep your background workers on cheaper spot instances that might randomly get killed. Or you might be running some heavy processing that needs different hardware, like a GPU for machine learning.
You could, of course, implement API endpoints for that in your app, but then you need to keep updating them as the workers change. Or you could implement a more generic DB access endpoint, but then you're just reinventing a networked DBMS with worse performance and no library support.
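The worker pattern described above — separate processes pulling heavy jobs out of a shared database — can be sketched roughly like this. The `jobs` table and its columns are invented for illustration; against a networked DBMS you'd swap `sqlite3` for that server's client library, which is exactly the point being made:

```python
import sqlite3

def claim_next_job(conn):
    """Atomically claim one pending job so concurrent workers don't collide.
    (A sketch only: a real queue also needs timeouts, retries, etc.)"""
    with conn:  # one transaction: select + status update commit together
        row = conn.execute(
            "SELECT id, payload FROM jobs WHERE status = 'pending' LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
        return row

# Demo with an in-memory database standing in for the shared server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, status TEXT)")
conn.execute("INSERT INTO jobs (payload, status) VALUES ('resize image', 'pending')")

job = claim_next_job(conn)
print(job)                   # (1, 'resize image')
print(claim_next_job(conn))  # None — already claimed
```

A worker on a dedicated machine would run exactly this loop, which is only possible if the database accepts connections over the network.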
> Not having a network-capable database limits you to only one server machine
Not true. What has been true until recently is that replication solutions for SQLite have been lacking, necessitating an old-school database to lean on existing multi-machine solutions, but that is no longer the case.
> You usually want a separate worker process (or multiple) to do heavy computation in the background
Also not true. Network overhead introduces things like the n+1 problem, which adds unnecessary round trips and breaks the relational model, requiring some pretty insane hacks to work around. What is true is that SQLite write contention has been a problem, necessitating an old-school database in high-write environments, but that is also a problem on its way out.
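For anyone unfamiliar with the n+1 problem mentioned above: fetching a list and then each item's children issues one query plus n more — nearly free in-process with SQLite, but one full round trip each over a network. A minimal sketch (the authors/books schema is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO books VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

# n+1: one query for the authors, then one more per author.
# Against a networked DBMS, every iteration here is a round trip.
n_plus_1 = {}
for author_id, name in conn.execute("SELECT id, name FROM authors"):
    titles = [t for (t,) in conn.execute(
        "SELECT title FROM books WHERE author_id = ?", (author_id,))]
    n_plus_1[name] = titles

# The relational answer: a single join, a single round trip.
joined = {}
for name, title in conn.execute(
        "SELECT a.name, b.title FROM authors a JOIN books b ON b.author_id = a.id"):
    joined.setdefault(name, []).append(title)

print(n_plus_1)  # {'Ann': ['A', 'B'], 'Bob': ['C']}
assert n_plus_1 == joined
```

With the database in-process, the "lazy" n+1 loop costs almost nothing, which is the data-locality benefit being claimed.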
Networked databases have been the norm until recently because they're older and more mature, and thanks to that maturity they were the only practical solution in many cases. But SQLite is starting to gain the same maturity, and we are now able to rethink the model and gain the benefits of data locality.
Besides, if you really need networking for your niche use case, and have some reason to use SQLite, there is already rqlite. Tightly coupling networking with the database engine doesn't add any value. They are decidedly distinct layers of concern. If Postgres or MySQL were rewritten from scratch today, even if protocol compatible, no doubt the separation between the database engine and the networking layer would also be more explicit.
While that's true, I also think that the current rage is stateless applications, outsourcing state to centralised database/cache/object storage layers over a network connection.
That allows you to easily spin containers up and down, migrate the application between nodes, etc., without having to maintain a high-performance network file system.
Using SQLite in such an environment requires you to solve persistence either in general (≈ rook/ceph) or in a SQLite-specific way (≈ litestream/litefs/rqlite, depending on your needs). One could probably argue that rqlite is essentially a network protocol for SQLite.
This really depends on what you mean by "application". It is very popular to have the "backend" stateless, yes, but it is also popular for each service to have its own dedicated database, and to consider the service/application to be the combination of the stateless backend and the dedicated stateful database.
The application is stateful even if the backend is stateless.
While that is very true, I think my point still stands.
Even if you consider the application to be the combination of stateless binary + stateful auxiliary services (like a database), the statelessness of the binary allows for some neat things.
I replied to a comment arguing that network-based database access is not necessary, and I think it very much is if you want to host things with the currently popular architecture. Unless you implement a general shared persistence layer on which you can run e.g. SQLite, but that doesn't seem better or easier to me.