John Dee

I made a Grafana dashboard to monitor soil moisture throughout my garden:

The measurements are not consistent across sensors, so I've been watching what happens after watering (the spikes) and estimating field capacity for each sensor (the green dashed line).
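
For anyone who wants to reproduce the estimate outside Grafana, here's a minimal sketch of the logic in plain Python. The readings and thresholds are made up; the idea is just to find a watering spike, wait for drainage to flatten, and treat the plateau as field capacity.

```python
# Rough field-capacity estimate from a soil-moisture time series.
# Hypothetical data: (hour, moisture %) samples from one sensor.
readings = [
    (0, 22), (1, 21), (2, 45), (3, 38), (4, 31),   # watering at hour 2
    (5, 28), (6, 27), (7, 27), (8, 26), (9, 26),   # drainage flattens
]

SPIKE_JUMP = 10      # % jump that counts as a watering event
SETTLE_HOURS = 5     # wait this long after a spike before sampling

estimates = []
for i in range(1, len(readings)):
    prev_val = readings[i - 1][1]
    hour, val = readings[i]
    if val - prev_val >= SPIKE_JUMP:
        # take the first reading SETTLE_HOURS after the spike
        settled = [v for h, v in readings if h >= hour + SETTLE_HOURS]
        if settled:
            estimates.append(settled[0])

if estimates:
    # average the post-spike plateaus across watering events
    print(f"estimated field capacity: {sum(estimates) / len(estimates):.1f}%")
```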

#grafana #homeassistant #permaculture #grownostr

I did some freehand recordings with my phone through the eyepiece and they were good. The hardest part was holding the phone in the right spot, so I bet an adapter would work well.

Trinocular microscope, various adapters and a DSLR camera with live view and HDMI output to a monitor. I record the video from the monitor screen with my phone. I'll try direct capture when I can get another HDMI cable. Not cheap or simple but the video quality is way better than those overpriced USB microscope cameras.

I found a variety of nematodes in a compost sample last night:

https://nostr.build/av/f24b39a9ffe5609a283838e3bf44746c07db7e99c9fb4787b04691ee41e54c9c.mp4

You can see this one swallowing. I got to see it poop too, but didn't get it on video. Nematode poop is one way that the soil food web cycles nutrients to make them plant available. Nematodes can eat 10,000 bacteria per day. They excrete the excess nutrients from the bacteria in a plant-available form.

https://nostr.build/av/436b65d03ead3ea32963aab224512122374fd9b91179886ce0a4bc7d3602a36b.mp4

https://nostr.build/av/0cf716e57c9f523937414d8c70b22de0f0ed674a4201bdb5451ccbc137da3fab.mp4

Seeing three different kinds of nematodes in a single drop from a compost sample is a good sign. This compost sample had lots of protozoa and fungi too. It was a little low on flagellates, and the organic matter wasn't fully decomposed. Still, I'm very impressed with it for being only four months old and never turned.

#soilfoodweb #compost #permies #permaculture #grownostr

Replying to Nunya Bidness

A Soil Owner's Manual

https://www.bookpeople.com/book/9781530431267

A Soil Owner's Manual: How to Restore and Maintain Soil Health by Jon Stika

A book that is short but jam-packed with the basics of soil function and structure as well as a plan to ameliorate degraded and destroyed soils. A must-have in your library.

"The problem lies in that most people do not know what a healthy soil looks or acts like, nor what makes it healthy or unhealthy."

"The five main functions of soil are: maintaining biodiversity and productivity, partitioning water and solute flow, filtering and buffering, nutrient cycling, and structural support."

"There are three main classes of soil properties; physical, chemical, and biological."

Some good reviews over at the permies.com wiki: https://permies.com/wiki/118775/Soil-Owner-Manual-Restore-Maintain

It's a bright sunny day, so I took a look at the illuminance numbers on the sensors again.

Lots of variance in the raw numbers. Giving each one a scale factor based on its largest reading seems to calibrate them fairly well. Sensor 2 is now less accurate at low light levels, but these will be outside, so accuracy at the low end isn't too important.
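
The scale-factor idea boils down to something like this minimal sketch (the peak values are hypothetical; sensor 1 is the reference):

```python
# Per-sensor scale factors from the largest illuminance reading each
# sensor produced over the same bright period. Numbers are made up.
peaks = {"sensor_1": 98500, "sensor_2": 35800, "sensor_3": 91200}  # lux

reference = peaks["sensor_1"]
factors = {name: reference / peak for name, peak in peaks.items()}

def calibrated(name: str, raw_lux: float) -> float:
    """Scale a raw reading onto sensor 1's range."""
    return raw_lux * factors[name]

print(factors)                        # sensor_1 -> 1.0 by construction
print(calibrated("sensor_2", 12000))  # ~ what sensor 1 would read
```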

#grafana #homeassistant #grownostr

Idiocracy getting closer every day.

With only a few hours of data, I can see that sensor 2 is definitely giving invalid illuminance numbers.

The variance in the other sensors is high, but they follow the same pattern and might be fine with a manual offset. Since it was cloudy today, none of them hit the upper limit of measurement. In previous testing with sensor 1, the limit was 100,000 lux.
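
A manual offset would be the additive version of the scale-factor trick: fit each sensor against sensor 1 over a shared window, and treat anything at the ceiling as saturated. A rough sketch with made-up readings:

```python
LUX_CEILING = 100_000  # measurement limit seen in earlier testing

samples = {  # simultaneous readings over a cloudy window, lux
    "sensor_1": [4200, 6100, 5300],
    "sensor_3": [3900, 5750, 5000],
}

ref = samples["sensor_1"]
offsets = {}
for name, vals in samples.items():
    # mean difference against sensor 1 over the shared window
    offsets[name] = sum(r - v for r, v in zip(ref, vals)) / len(vals)

def corrected(name: str, raw: float) -> float:
    # readings at the ceiling are saturated; don't trust them
    if raw >= LUX_CEILING:
        return float(LUX_CEILING)
    return raw + offsets[name]

print(offsets)                      # sensor_1 -> 0.0 by construction
print(corrected("sensor_3", 4800))  # ~ what sensor 1 would read
```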

Next, I made it rain.

The large spike shows the sensors are correctly detecting a change in moisture. Despite being right next to each other, their readings differ by as much as 20 percentage points. Seeing the moisture level drop off quickly seems like a sign of good drainage, at least down to 3 inches.

It may be worth trying to calibrate the sensors in water. At the very least, they give an idea of relative moisture change over time.
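
If I try the water calibration, it would just be a two-point linear map: note what the sensor reports in dry air and fully submerged, then interpolate field readings between those. A sketch with hypothetical raw values:

```python
# Two-point calibration: the sensor's reported value in dry air maps
# to 0% and fully submerged maps to 100%. Raw values are placeholders.
RAW_DRY = 3.0     # reading held in dry air
RAW_WATER = 66.0  # reading fully submerged

def moisture_percent(raw: float) -> float:
    """Linearly map a raw reading onto the dry..water span, clamped."""
    pct = (raw - RAW_DRY) / (RAW_WATER - RAW_DRY) * 100.0
    return max(0.0, min(100.0, pct))

print(moisture_percent(34.5))  # halfway between dry and saturated -> 50.0
```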

#homeassistant #grownostr

I'm focused on the data right now, but the cool thing about getting the data into Home Assistant is being able to do stuff like that. Send a notification and open a valve or turn a smart plug on, etc.
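
For example, here's a minimal sketch against Home Assistant's REST API: read a moisture sensor, and if it's below a threshold, send a notification and turn a smart plug on. The URL, token, and entity IDs are placeholders for your own setup; in practice you'd more likely build this as a native HA automation.

```python
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

MOISTURE_ENTITY = "sensor.plant_1_moisture"  # hypothetical entity id
PLUG_ENTITY = "switch.irrigation_plug"       # hypothetical entity id
THRESHOLD = 20  # percent

# Read the current state of the moisture sensor
state = requests.get(f"{HA_URL}/api/states/{MOISTURE_ENTITY}",
                     headers=HEADERS).json()
moisture = float(state["state"])

if moisture < THRESHOLD:
    # Send a notification through the default notify service
    requests.post(f"{HA_URL}/api/services/notify/notify", headers=HEADERS,
                  json={"message": f"Soil moisture low: {moisture}%"})
    # Turn the smart plug (or a valve) on
    requests.post(f"{HA_URL}/api/services/switch/turn_on", headers=HEADERS,
                  json={"entity_id": PLUG_ENTITY})
```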

Numeric values for each reading. This is the basic Home Assistant card.

Tapping any of those readings will open the basic Home Assistant graph. It can show more history but takes a few extra taps. You can see here the spike in soil moisture after I watered the sensors.

These are branded VegTrug, but I've seen them as Xiaomi Mi Flora and Flower Care. The model is HHCCJCY01HHCC. They're not going to be a simple turnkey solution though. They're Bluetooth with short range, and I'm using ESP32s powered by a power bank (good for a few days) to relay the data back to Home Assistant and Grafana.
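
That said, the sensors speak a widely documented GATT protocol, so you can also read one directly from a laptop. Here's a minimal sketch using the bleak library; the MAC address is a placeholder, and the characteristic UUIDs and byte layout come from the community-documented Flower Care protocol, so treat them as assumptions to verify against your firmware.

```python
import asyncio
from bleak import BleakClient

ADDRESS = "C4:7C:8D:XX:XX:XX"  # placeholder: your sensor's MAC
MODE_CHAR = "00001a00-0000-1000-8000-00805f9b34fb"  # mode-change characteristic
DATA_CHAR = "00001a01-0000-1000-8000-00805f9b34fb"  # realtime data characteristic

async def read_sensor() -> None:
    async with BleakClient(ADDRESS) as client:
        # Magic bytes that switch the sensor into realtime-data mode
        await client.write_gatt_char(MODE_CHAR, b"\xa0\x1f")
        raw = await client.read_gatt_char(DATA_CHAR)
        temp = int.from_bytes(raw[0:2], "little", signed=True) / 10.0  # deg C
        lux = int.from_bytes(raw[3:7], "little")                       # illuminance
        moisture = raw[7]                                              # percent
        conductivity = int.from_bytes(raw[8:10], "little")             # uS/cm
        print(f"{temp=} {lux=} {moisture=} {conductivity=}")

asyncio.run(read_sensor())
```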

Dropping some science on the garden today.

I've been testing a pair of these plant sensors for the last few weeks and decided they were useful enough to get more. During testing I found that sensor 2 showed odd readings for light level. With the two placed right next to each other, both sensors showed approximately the same value below 10k lux, but above 10k lux sensor 2 was off by a factor of about 2.75. It still followed the same pattern, so it seems like a calibration error.

I got 6 more sensors and placed them all next to each other. Now I'm giving the sensors a chance to stabilize and accumulate data. We'll see how consistent the readings are before putting them in their final locations.

#permies #permaculture #gardenstr #grownostr

The AnimateDiff extension for Automatic1111 needs some work, but the results are promising. See replies for more examples.

https://m.primal.net/HIMn.mp4

(right-click loop and fullscreen for the best effect)

Here's what I learned from testing:

* You need enough VRAM to render all the frames at once, in a single batch.

* 16 frames at 512x512 uses 8.9 GB, 512x768 uses 11.2 GB for me.

* Use the 1.4 motion module. The 1.5 module almost always produces watermarks, and often very little motion.

* Crank the CFG up to 10-25, otherwise the images are faded and lack detail. This seems like a bug in the extension because the original implementation works at normal CFG.

* Going over 75 tokens will change the image halfway through because of the way A1111 handles long prompts, so keep prompts under 75 tokens.

* Face restore seems less stable between frames, try turning it off.

* Dynamic prompts should be removed, or they'll change for each frame (which might be neat if done intentionally).

* The motion modules were trained on 16 frames. You can use more or fewer, but don't expect results as good. 24 frames often has too much change between frames, though it can still work; 8 frames is enough to make an image look "alive" but not long enough to show much motion.

* Remember to edit ui-config.json after running the plugin to set your defaults. Search for "animatediff" and set the motion module and output formats.
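
Since ui-config.json is huge, here's a small helper that lists the keys mentioning "animatediff" so you can see which defaults to set (the path assumes you run it from the webui directory):

```python
import json

PATH = "ui-config.json"  # lives in the webui root after first run

with open(PATH) as f:
    config = json.load(f)

# List every UI default whose key mentions the extension
for key, value in config.items():
    if "animatediff" in key.lower():
        print(f"{key!r}: {value!r}")

# Once you've identified the exact keys, set them and write the file back:
# config["<key you found>"] = "<your default>"
# with open(PATH, "w") as f:
#     json.dump(config, f, indent=4)
```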

AnimateDiff combines well with One Button Prompt, but be careful not to go over 75 tokens. Try turning down the complexity to 3 or 4.

#stablediffusion #aiart #animatediff #grownostr

First harvest of serviceberries (Amelanchier alnifolia) from a tree I planted two years ago.

#permies #permaculture