John Dee

Haha yeah. Fancy name for a spike with a pressure gauge on top. You can do the same test with a piece of rebar sharpened on one end. When it gets hard to push into the soil that's about 150 psi, the limit of what plant roots can push through.
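If you want to put a number on "hard to push," here's a quick back-of-the-envelope sketch in Python. The 1/2 inch probe diameter is just an assumed example, not part of the original test; plug in whatever you're actually using.

```python
# Rough estimate of the push force that corresponds to ~150 psi of soil
# resistance on the tip of a sharpened rebar probe.
# Assumption: 1/2 inch diameter rebar (change to match your probe).
import math

PSI_LIMIT = 150          # approximate pressure plant roots can push through, psi
diameter_in = 0.5        # assumed probe diameter, inches

tip_area = math.pi * (diameter_in / 2) ** 2   # cross-sectional area, in^2
push_force = PSI_LIMIT * tip_area             # pounds of force

print(f"Tip area: {tip_area:.2f} in^2")
print(f"~{push_force:.0f} lbf of push corresponds to {PSI_LIMIT} psi")
# roughly 29 lbf for a 1/2 inch probe
```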

No dig for the last few years, and applied some high-quality compost last year. I think the good biology in the compost is building up the soil food web in the soil.

Replying to John Dee

https://void.cat/d/6zJseJek9a6dAmmgsBZYUv.webp

It might be helpful, or at least interesting, to know how a Stable Diffusion checkpoint was trained. I grabbed this script from Reddit and packaged it up for easier use. It compares two checkpoints and shows the tokens with the largest difference between them. These are likely to be the words used most frequently during fine-tuning.

https://github.com/zappityzap/tokenator
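For anyone curious how a comparison like this can work under the hood, here's a minimal sketch of the idea, not the actual tokenator code: load the text encoder's token embedding table from both checkpoints, measure how far each token's embedding moved, and print the tokens that moved the most. The state-dict key and tokenizer are assumptions based on typical SD 1.5 checkpoints.

```python
import torch
from safetensors.torch import load_file
from transformers import CLIPTokenizer

# Assumed key for the text encoder's token embedding table in SD 1.x checkpoints
EMB_KEY = "cond_stage_model.transformer.text_model.embeddings.token_embedding.weight"

def top_changed_tokens(base_path, tuned_path, k=20):
    base = load_file(base_path)[EMB_KEY].float()
    tuned = load_file(tuned_path)[EMB_KEY].float()

    # Per-token L2 distance between the two embedding tables
    diff = (tuned - base).norm(dim=1)

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    top = torch.topk(diff, k)
    for score, idx in zip(top.values, top.indices):
        token = tokenizer.convert_ids_to_tokens(int(idx))
        print(f"{token:<20} {float(score):.4f}")

# Example (paths are placeholders):
# top_changed_tokens("v1-5-pruned-emaonly.safetensors", "photon_v1.safetensors")
```

Tokens that barely moved were probably untouched by fine-tuning; the ones at the top of the list are the likeliest candidates for words that showed up a lot in the training captions.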

The image was generated with 11 of the top tokens from the Photon checkpoint and a random prompt from the One Button Prompt extension. Three of the Photon tokens were real words: fancy, steel, mask. The rest were nonsense words or word fragments: zzle elis eha fol hep spon abia wbo

The artists selected by One Button Prompt are important to the style, but the image doesn't resemble either artist's work.

I generated the image with txt2img at 768x768, adjusted levels and curves in GIMP, then brought it back into img2img for upscaling with ControlNet Tile and Ultimate SD Upscale.

txt2img prompt:

zzle elis eha fol hep spon abia fancy steel mask wbo, art by Myoung Ho Lee, (art by Jacek Yerka:0.7) , landscape of a Maximalist Costa Rica, roots with Herb garden, at Midday, Ultrarealistic, Nostalgic lighting,

Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 3, Seed: 2631022029, Size: 768x768, Model hash: ec41bd2a82, Model: photon_v1, Lora hashes: "add_detail: 7c6bad76eb54, epiNoiseoffset_v2: d1131f7207d6", Version: v1.5.1
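If you'd rather drive this from a script than the web UI, Automatic1111 exposes an HTTP API when launched with --api. A rough sketch of the txt2img step with the settings above; the local URL, output filename, and exact payload shape are assumptions based on the /sdapi/v1/txt2img endpoint.

```python
import base64
import requests

payload = {
    "prompt": ("zzle elis eha fol hep spon abia fancy steel mask wbo, "
               "art by Myoung Ho Lee, (art by Jacek Yerka:0.7), "
               "landscape of a Maximalist Costa Rica, roots with Herb garden, "
               "at Midday, Ultrarealistic, Nostalgic lighting"),
    "steps": 30,
    "sampler_name": "DPM++ SDE Karras",
    "cfg_scale": 3,
    "seed": 2631022029,
    "width": 768,
    "height": 768,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns images as base64-encoded PNGs
with open("photon_test.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```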

Easily pulled this dandelion out of my garden bed. I think my soil is improving.

#soilfoodweb #permies #permaculture #gardenstr

https://void.cat/d/9k158MMUn2Z5F3zhb5BYto.webp

I didn't intend to delete the post of the beautiful garlic, oops. To make up for it, here's a beautiful red dragonfly I met in the garden today. It was so busy chowing down on something it caught that it didn't mind me sticking a camera in its face.

get weird. stay weird. nostr.

https://void.cat/d/3De2ZjkQRDyUpoy5GQRpVE.webp

#stablediffusion

i am become nostr

saviour of worlds

Stable Diffusion for images. Automatic1111 is the most popular web UI, and Vladmandic is a fork of it. Olivio Sarikas and Sebastian Kamph on YouTube are good resources.

I experimented with a project called dalai for ChatGPT-style text generation, but haven't followed that as closely.

I've been excited about the Lynx R-1 mixed reality headset since I first heard about it, and it just keeps getting better. It's open source, and the hardware is as open as they can make it. Much lower latency than other headsets, higher brightness, and raw access to sensors and camera frames.

They didn't design it to help people with low vision, but check out this talk. Even without purpose-built software, this thing is giving people their vision back. Imagine what it could do with software features made specifically to aid vision.

https://youtu.be/rWzVX11hF_8?t=102

#grownostr #mixedreality #ar #vr #xr