Paul Graham / @paulg (RSS Feed)
Twitter feed for: @paulg. Generated by nitter.moomoo.me https://nitter.moomoo.me/paulg

The Collisons are playing the long game. That's one of their secret weapons.

https://nitter.moomoo.me/paulg/status/1638522295743393792#m

**Response to @paulg:**

AI vs EU, for example. Who wins? I can imagine them acting quickly to create regulations that shoot themselves in the foot, and I can imagine them getting the right answer, but too late. But it's hard to imagine them getting things right quickly.

https://nitter.moomoo.me/paulg/status/1637023378392465411#m

**Response to @paulg:**

One of my tests of the magnitude of a change is: How does it change what advice I'd give my kids? And when I try to answer that question, it's scary how little I can predict. The best I can do is tell them to surf this wave rather than be crushed by it.

https://nitter.moomoo.me/paulg/status/1637018635519574018#m

Jessica: What's new?

Me: OpenAI just released a new version that's much better. And people hadn't even stopped talking about the previous version yet.

Jessica: Are you scared about AI?

Me: A little. At the very least, it's going to change everything.

https://nitter.moomoo.me/paulg/status/1637015386334941185#m

"If banks were suddenly forced to liquidate their bond and loan portfolios, the losses would erase between 77 percent and 91 percent of their combined capital cushion. It follows that large numbers of banks are terrifyingly fragile."

washingtonpost.com/opinions/… (https://www.washingtonpost.com/opinions/2023/03/17/svb-first-republic-bank-credit/?utm_source=twitter&utm_medium=social&utm_campaign=wp_opinions)

https://nitter.moomoo.me/pic/card_img%2F1636766464056098816%2FlywMeiag%3Fformat%3Djpg%26name%3D800x419

https://nitter.moomoo.me/paulg/status/1637009635356950531#m

**RT @d_feldman:**

On the left is GPT-3.5. On the right is GPT-4.

If you think the answer on the left indicates that GPT-3.5 does not have a world-model...

Then you have to agree that the answer on the right indicates GPT-4 does.

https://nitter.moomoo.me/d_feldman/status/1636955260680847361#m

**Response to @paulg:**

I'm not saying it was a mistake for Paris to become more bicycle-centric. It's probably good for the city, and its people. But it does feel significantly more dangerous to walk there.

https://nitter.moomoo.me/paulg/status/1636802463226314752#m

Though this seems obvious in retrospect, adding bicycles to Paris doesn't turn it into Amsterdam. You still have the aggressive chaos of Parisian traffic, but now it's silent and could be coming from any direction.

https://nitter.moomoo.me/paulg/status/1636802038255190041#m

**Response to @paulg:**

I think the reason is that we're so sensitive to differences between people's faces (since we need to be able to identify them) that this ordinarily dominates our perception of group photos. If we invert the photo, it's turned off, so we see the similarity.

https://nitter.moomoo.me/paulg/status/1636801094868140043#m

It's easier to see how fashion and convention (about e.g. how to smile) make everyone look the same if you turn pictures upside down.

https://nitter.moomoo.me/paulg/status/1636800174696562689#m

**Response to @paulg:**

Though they may be composed of upstanding individuals, the structure of large organizations causes them to behave in ways that would be evil, if they were an individual person. So remember, whenever you're dealing with a large organization, you're also dealing with an evil one.

https://nitter.moomoo.me/paulg/status/1636796706615328780#m

When you're doing a deal with a large organization, find out if the people you're negotiating with actually have final say. Usually they don't, and that means the deal you've agreed upon can be, and often is, killed at the last minute by higher ups.

https://nitter.moomoo.me/paulg/status/1636794001750900736#m

**RT @gdb:**

Easy to miss in the GPT-4 blog post — accurately predicting model capability, using 1,000x less compute than the real run.

A critical aspect of AI safety will be predicting when various capabilities will arrive — still nascent but taking a real step in that direction.

https://nitter.moomoo.me/gdb/status/1636493356699357184#m

**RT @ycombinator:**

We're hosting a meetup for Open Source Software startups! YC has funded over 75 to date and we’re especially excited about them.

If you’re interested in starting an OSS company someday, join us to connect & learn with other founders in the space.

➡️ apply.ycombinator.com/events… (https://apply.ycombinator.com/events/346)

https://nitter.moomoo.me/ycombinator/status/1636489217575624704#m

**Response to @paulg:**

Considering how much most people hate and fear writing, I can imagine a future in which a few thousand organic writers generate the data used to train the AIs used by all the rest.

https://nitter.moomoo.me/paulg/status/1635673666112430082#m

**Response to @paulg:**

I meant that originally as a sort of joke. But after saying it out loud, as it were, I realized that with current models of AI, at least, the more people shift to using AI, the more influence accrues to those who don't.

https://nitter.moomoo.me/paulg/status/1635673030100676615#m

A decade or so ago I stopped knowing in advance that the Oscars were happening, and would have to ask afterward who won. This year I stopped even asking. The Oscars happened and I don't know who won. (In fact, I had to check whether they'd even happened.)

https://nitter.moomoo.me/paulg/status/1635344175356051458#m

At first I didn't see the point of adding view counts to tweets, but there is a use for it: I'm less tempted to respond to some replies. If someone says something dumb, but only 4 people even saw it, who cares?

https://nitter.moomoo.me/paulg/status/1635307011603841024#m