one of the major red flags, IMO, about the state of the brains of people in software development is that they don't seem to understand that repeating redundant and irrelevant words in code is an impediment to reading it.

it seems to me that among a section of the dev community there is a form of cryptomania: they suffer through the process of learning how to ignore half of what's on their screen, and because of that pain, they think everyone else should be ok with it.

i'm not. i'm a literary person. i read prose and poetry and academic texts. if you wrote these kinds of documents the way most devs write code, the publisher would punt you out the door with a pair of crampons on their feet.


Discussion

IMO, it also proves that most devs are illiterate

i've seen better code written by people whose second language is english than most of the garbage i read.

maybe i'm spoilt because i use Go, and the standard of literacy among go devs is a lot higher, because the devs who built the language have old-school literary/academic language skills.

the biggest reason, the first reason, why i rewrote most of go-nostr was that the names of almost everything start with `nostr.` i'm like, shit, i didn't know that i was building a nostr fucking relay. thanks for reminding me and polluting my screen with this noise.
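
what i mean is what the Go community calls "stutter": repeating the package name inside every identifier, even though callers already qualify it. a made-up sketch (not the actual go-nostr API):

```go
package nostr

// stuttering: callers read nostr.NostrRelay and nostr.NewNostrRelay,
// repeating the qualifier that is already on screen.
type NostrRelay struct{}

func NewNostrRelay() *NostrRelay { return &NostrRelay{} }

// idiomatic: callers read nostr.Relay and nostr.NewRelay.
type Relay struct{}

func NewRelay() *Relay { return &Relay{} }
```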

not only that, NIP-XX is not english. if you want to write names like that, go and learn assembler, where you can MOV and JMP yourself to death.

fucking shaved monkeys

yeah, I like concise code as well.

I've been playing with React lately and there is an awful lot of line noise.

Ruby is probably the best as far as code aesthetics go, but ofc it's not statically typed, and it has worse performance than Go. But they are meant to be used in different areas, so they are complementary, not competing.

programming is programming. i don't buy this "used in different areas" thing. if it is all the same at the level of execution, then nothing else about the language matters, except whether it wastes the programmer's time or impedes debugging.

i have written GUIs in go, and given the right API design it is no more difficult than in some language "designed for GUIs".

in actual fact, the origin of the majority of Go's syntax WAS a language designed for GUIs, using atomic FIFO queues (channels): Newsqueak, Rob Pike's "language for mice".

is there REALLY justification for this common expression about "languages for use cases" when everything needs if, for, functions, methods, types? the fact that you can write Go almost as if it were C for embedded systems, using TinyGo, tells me: nope.

- can do servers

- can do GUI

- can do embedded programming

no, i strongly disagree. i don't think there is anything i can't do with Go. it's just that this silly attitude prevails, and so people invent new languages with extra wingdings and features that bloat and bloat and bloat the compilation, impede debugging, make for unreadable code, require you to maintain almost identical code across multiple files (looking at you, C/C++), and, worst of all, either their performance is abysmal or their compilation time is eternal.

the facts are that, as far as performance goes, Go is only about 5% behind java, C/C++ and Rust. considering how abrasive the fanatics of these languages consider GC to be, that tells me they are masochists who like to sit around twiddling their thumbs for 5% extra performance, plus bragging rights. i think bragging rights is the main reason.

i only credit bragging rights to assembler programmers. everyone else can get off their arse and write their code properly, for humans to read, and not just for retardedly complex Rube Goldberg compilers.

the one area where Go is not so good is kernels. that's it. and that's only because you can't have a realtime system with GC.

servers are not realtime. the latency costs of GC are amortised very well by Go, and it is only in extreme cases like realtime video streaming (again, notice: realtime) that you need to explicitly side-step the GC in order to use Go.
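
to illustrate what side-stepping the GC looks like in practice: the usual move is to preallocate and recycle buffers so the hot path allocates nothing. a minimal sketch using sync.Pool (the frame size is a made-up number):

```go
package main

import (
	"fmt"
	"sync"
)

// framePool recycles fixed-size buffers so the per-frame hot path
// allocates nothing and the GC has nothing new to trace.
// 1316 bytes is a made-up, RTP-ish payload size for illustration.
var framePool = sync.Pool{
	New: func() any { return make([]byte, 1316) },
}

func processFrame(data []byte) {
	buf := framePool.Get().([]byte) // reuse instead of allocating
	copy(buf, data)
	// ... encode/send buf here ...
	framePool.Put(buf) // hand it back for the next frame
}

func main() {
	frame := make([]byte, 1316)
	for i := 0; i < 3; i++ {
		processFrame(frame)
	}
	fmt.Println("three frames processed with zero per-frame allocations")
}
```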

even then, i suspect that tinygo would work for kernels, because kernels and embedded software share a tight resource limitation and a requirement for efficiency.

i used a programming language years ago called Amiga E which had a lot of the basic features of Go, mainly the dynamic arrays. dynamic arrays are a pain in the ass, with really long-winded syntax, in C/C++/Rust. it's stupid, because they are essentially just fat pointers. and why does nobody except java and go use interfaces? they're far more efficient, only a double indirection to dereference interface types, and they make it really simple to write utility libraries like database engines and GUI toolkits.
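
to make the "fat pointer" and "double indirection" points concrete, a minimal Go sketch (hypothetical names): a slice is a (pointer, length, capacity) triple with one-line syntax, and an interface value is a (type, data) pair, so a method call costs one extra hop through the method table:

```go
package main

import "fmt"

// Store is a hypothetical utility-library interface.
type Store interface {
	Get(key string) string
}

// memStore satisfies Store; a call through the interface is just a
// double indirection: interface -> method table -> method.
type memStore map[string]string

func (m memStore) Get(key string) string { return m[key] }

func main() {
	// a dynamic array in one line: the slice is a fat pointer
	// (data pointer, length, capacity), growable with append.
	nums := []int{1, 2, 3}
	nums = append(nums, 4)
	fmt.Println(len(nums), cap(nums)) // 4 and an implementation-chosen cap

	// an interface value is two words: type pointer + data pointer.
	var s Store = memStore{"a": "1"}
	fmt.Println(s.Get("a")) // 1
}
```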

Ruby is not the best at anything. It has way too many idioms that are there just because. Like backwards if statements. No one needed that and it reduces clarity.

so, how does that work?

`do_something if condition`?

what about else clauses?

at least `condition ? true : false` is sorta reasonable (C and javascript have this kind of syntactic sugar).
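
for contrast, Go deliberately has neither the trailing if nor the ternary; both flatten into a plain if statement. a trivial sketch:

```go
package main

import "fmt"

func main() {
	loggedIn := true

	// ruby:  puts "welcome" if logged_in
	// C/JS:  msg = logged_in ? "welcome" : "guest";
	// Go has no sugar for either; you write the if out in full.
	msg := "guest"
	if loggedIn {
		msg = "welcome"
	}
	fmt.Println(msg) // welcome
}
```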

Yeah, it only works for one-liners with no else clause. Sure, it can be "nice", but in my distant (15 years ago) experience Ruby gets so loaded up with sugar that you can't figure out what the computer is even doing anymore.

I think half the reason Ruby is dog slow is that the developers purposely write their code to hide implementation details to keep the interface "nice". Everyone ends up calling code with unknown costs that does a bunch of unnecessary magic.

Rails philosophy of "convention over configuration" is an example. If you do things in the conventional way it all just magically works. But to a developer unfamiliar with those conventions it is a black magic mystery why it works at all.

Also "everything is an object" is nuts. Sometimes a number it just a number, a struct is just a struct, a function is just a function. Any language that does the "everything is a ..." dance is nuts.

except when they say a function is a variable. that one is a powerful distinction, and it's the reason basically all languages support closures now.
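
a minimal Go sketch of the idea (made-up names): a function value stored in a variable, closing over state:

```go
package main

import "fmt"

// counter returns a function value that closes over n; the returned
// closure is a variable like any other and carries its state with it.
func counter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	next := counter()                   // a function held in a variable
	fmt.Println(next(), next(), next()) // 1 2 3
}
```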

also i think you can boil things down to constant, variable, type, function/method (the receiver is just a convenience) and pointer/reference. to make things easy for humans, you can then attach a symbol to these, which gives you debugging.

objects are shit because they are usually 3+ levels of indirection, which opens up space for misinterpretation, long compilation times, and pernicious, hidden errors.

yes, symbols allow debugging and reflection, which allows you to write code that writes code. this is how some interpreters implement execution of interpreted code: by assembling trees of closures and then loading their stacks.
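
a toy sketch of that interpreter technique in Go (all names hypothetical): each node of the parse tree is compiled once into a closure, and running the program is just calling the root closure with an environment:

```go
package main

import "fmt"

// expr is a compiled expression: a closure that reads variables from
// an environment and returns a value.
type expr func(env map[string]float64) float64

// lit compiles a literal into a closure that ignores the environment.
func lit(v float64) expr {
	return func(map[string]float64) float64 { return v }
}

// sym compiles a variable reference into an environment lookup.
func sym(name string) expr {
	return func(env map[string]float64) float64 { return env[name] }
}

// add compiles two sub-expressions into a closure that sums them.
func add(a, b expr) expr {
	return func(env map[string]float64) float64 { return a(env) + b(env) }
}

func main() {
	// compile (x + 2) + y once into a tree of closures...
	prog := add(add(sym("x"), lit(2)), sym("y"))
	// ...then execute it against different environments.
	fmt.Println(prog(map[string]float64{"x": 1, "y": 3}))  // 6
	fmt.Println(prog(map[string]float64{"x": 10, "y": 0})) // 12
}
```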

Objects are fine, but nesting them or endless inheritance is complete garbage. I shouldn't have to dig through 10 levels of inheritance trying to figure out which override of a method is actually being called. Zend, and by extension Magento, are terrible at this. The latter tried to make the core functionality extensible by non-programmers via an awful XML plugin language. By the time that was done being applied, a view model could be any of about 20 similarly (or even identically) named things.

yeah, javascript has a lot of that; all OOP languages do. the worst you have to deal with in Go is interfaces, but a good IDE symbol database makes that easy. Go is a bit harder to read in that respect, because interface satisfaction is implicit, where java requires explicit declaration of interfaces.
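
a minimal illustration of implicit satisfaction (hypothetical names; in java the type would need an explicit `implements` clause the IDE can jump from):

```go
package main

import "fmt"

// Notifier is satisfied by any type with the right method set;
// no type anywhere declares that it implements it.
type Notifier interface {
	Notify(msg string)
}

// emailSender never mentions Notifier, yet satisfies it implicitly.
type emailSender struct{ addr string }

func (e emailSender) Notify(msg string) {
	fmt.Printf("mail to %s: %s\n", e.addr, msg)
}

func broadcast(n Notifier, msg string) { n.Notify(msg) }

func main() {
	// the method set is checked at this call site; there is no
	// "implements" declaration to search for, which is why a good
	// IDE symbol database matters in Go.
	broadcast(emailSender{addr: "ops@example.com"}, "deploy done")
}
```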

i've also got a bone to pick with this idea that languages cannot be general while covering specific uses well. i know about Gödel's theorem and all that, and that all languages are equivalent, but paradoxes and other anomalies are irrelevant to computers: there, that's just called an error.

what i really don't like is how it seems to be underpinned by the "rules for thee and not for me" inherent in the "polylogism" of marxism: the idea that communication cannot happen between classes, for reasons.

this is bullshit.

even many kinds of mammals have rudimentary language abilities that enable humans to have a conversation with them that actually achieves something: dogs herding sheep, cats asking for dinner or for their toilet to be cleaned.

i mean, even computers can communicate with humans through LLM programs. how could any two classes of thing be more different than a souped-up calculator and a human?

there is only one class war:

the war of criminals against good people.

if there is no communication between two classes of things it's because one is a parasite and the other is its host.

computers are computers and computations are computations, so yeah, in theory I agree with you. However, some languages have a Lindy effect in certain areas. A lot of effort was spent developing frameworks for them, which became industry standards. You don't have to use those, but skipping them is a big disadvantage.

yeah, in general, it's all about libraries and hardware interfaces.