i think we're only gonna be seeing more of them tho; microservices are great for llm development since they fit into context, a large monorepo is harder to tackle


Discussion

NGL, talking about this makes me glad I've retired. I was knee-deep in that shit show for a very long time.

I'm an embedded systems dev. Firmware cannot crash, especially when it's running an engine or a pacemaker. To design firmware with microservices, performing basic functions over whatever network connection you might have, would be insane.

Separation of concerns is not the same as distribution of concerns. Every communication channel you add is a failure point and a delay: a measurable, irreducible minimum delay. Maybe that delay is small when you run 1000 microservices on the same machine, but when it's time to "scale" across the network you increase your latency by orders of magnitude, even when you scale within the same datacenter.
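That minimum delay can be made concrete with a rough benchmark. This is a hypothetical sketch, not a rigorous measurement: it compares an in-process function call against a loopback TCP round-trip, which is the *cheapest possible* network hop (real cross-machine or cross-datacenter calls are far slower still):

```python
import socket
import threading
import time

def handle(conn):
    # Trivial echo server: one read, one write per request
    while data := conn.recv(64):
        conn.sendall(data)

def bench():
    # In-process "call": no serialization, no socket, no scheduler hop
    def local_add(a, b):
        return a + b

    n = 10_000
    t0 = time.perf_counter()
    for _ in range(n):
        local_add(1, 2)
    local_us = (time.perf_counter() - t0) / n * 1e6

    # Loopback TCP round-trip: same machine, kernel network stack only
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=lambda: handle(srv.accept()[0]), daemon=True).start()

    cli = socket.create_connection(("127.0.0.1", port))
    t0 = time.perf_counter()
    for _ in range(n):
        cli.sendall(b"ping")
        cli.recv(64)
    net_us = (time.perf_counter() - t0) / n * 1e6
    cli.close()
    return local_us, net_us

local_us, net_us = bench()
print(f"in-process call: {local_us:.3f} us, loopback round-trip: {net_us:.3f} us")
```

Even on loopback, the round-trip is typically orders of magnitude slower than the function call, before adding serialization, retries, or an actual wire.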

To distribute concerns across the network is a valid design pattern, but it's not like waterfall vs. agile, where you maximize the "best thing" about a process, because the "best thing" about software is not the network. Networking is a tool with utility and tradeoffs, and it always increases complexity.

The term 'microservices' implies going ham with little networked backends while pretending that distributing logic this way has no costs.

Independently developed and deployed backends have a time and place, but it certainly is a costly pattern that should be used as a last resort.

I have yet to see a case at any scale where backend code should not be developed together in a monolithic codebase, even when it is deployed in a distributed architecture for various needs (job processing vs request/reply backends).

The notion of microservices is a lazy event-pipelining architecture. It's better to use an RTC microkernel framework inside your application and break out scalable pieces with proxy placeholders or a pub-sub rally point, e.g. over zmq or redis.
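One way to read that suggestion, as a minimal hypothetical sketch in plain Python (the `Bus` and `RemoteProxy` names are invented here; a real deployment would put zmq or redis behind the proxy): handlers register against topics on an in-process bus, and a proxy placeholder with the same handler signature marks the seam where a piece could later be broken out, without callers changing.

```python
from collections import defaultdict

class Bus:
    """In-process pub-sub 'rally point': topic -> list of handlers."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self.subs[topic]:
            handler(msg)

class RemoteProxy:
    """Placeholder with the handler signature; if this piece is ever
    scaled out, a zmq/redis publish would replace the append below."""
    def __init__(self, name):
        self.name = name
        self.forwarded = []

    def __call__(self, msg):
        # In production this would serialize msg onto the wire.
        self.forwarded.append(msg)

bus = Bus()
results = []
bus.subscribe("orders", lambda msg: results.append(("billing", msg)))
proxy = RemoteProxy("shipping-service")
bus.subscribe("orders", proxy)

bus.publish("orders", {"id": 42})
print(results)            # [('billing', {'id': 42})]
print(proxy.forwarded)    # [{'id': 42}]
```

The point of the seam: publishers never know whether a subscriber is a local function or a proxy to something remote.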

Exactly.

Evented production topologies can be achieved without losing the code reuse of developing everything together in the same repo.

You can have thousands of engineers sharing code via functions while also deploying the entire codebase in a myriad of different production modes to execute procedures differently for different purposes.

Just because Kafka consumers need their own persistent process that's separate from the request/response architecture doesn't mean those functions need their own source repository, deployment systems, etc.

Reuse everything.
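The shape being described here, one codebase deployed in multiple production modes, can be sketched like this (all names hypothetical; `serve_http` and `consume_events` stand in for a real web framework and a real Kafka client):

```python
import sys

# Shared business logic: the code every deployment mode reuses.
def price_order(order):
    return order["qty"] * order["unit_price"]

def serve_http():
    # Request/response mode: a real deployment would mount price_order
    # behind a web framework here.
    return f"http mode, priced: {price_order({'qty': 2, 'unit_price': 5})}"

def consume_events():
    # Persistent consumer mode: a real deployment would poll Kafka here,
    # calling the same price_order on each message.
    return f"worker mode, priced: {price_order({'qty': 3, 'unit_price': 4})}"

MODES = {"http": serve_http, "worker": consume_events}

def main(mode):
    # Same artifact, same repo: an entrypoint flag picks the mode.
    return MODES[mode]()

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 and sys.argv[1] in MODES else "http"
    print(main(mode))
```

One repo, one build, two (or twenty) process shapes; the deployment system only varies the entrypoint argument.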


Microservices seem to be just another instance of Conway's Law.

https://en.m.wikipedia.org/wiki/Conway%27s_law

Some organizations are composed of many, many single-responsibility teams, and so they evangelize that their way is the way to do things.

I think a lot of what happened is that Google and the like never sufficiently open-sourced their monorepo secret sauce, and that led to (dumb) engineers adopting familiar structures, like GitHub orgs with many different repos that they could easily deploy in a 1:1 fashion with cloud tooling.

Fast forward a decade, and the "microservices" monstrosity has become normalized.

i think also the huge VC-driven hyperscaling moment in software development, creating all these huge companies hiring 100s of engineers per month who ran fast and broke things, meant it was easier to make new hires productive if every new team essentially had a greenfield project vs onboarding to a monorepo and getting familiar with everything

I work in a 100M ARR majestic monolith daily and do not get this feeling. It’s all just subfolders and context management. In fact, I wonder if microservices _prevent_ an LLM from getting the big picture in many cases.

i guess it cuts both ways, but now with models with huge context windows that is becoming less of a constraint