White Noise is more Nostr than Signal
Rewrote my bot to read values from a config file and the result has been great so far. Here is how it looks for me:
[Sampler]
temperature = 0.62
max_length = 14500
max_context_length = 32768
top_p = 0.90
min_p = 0.07
top_k = 25
rep_pen = 1.04
rep_pen_range = 330
rep_pen_slope = 0.6
enable_thinking = False
stop_sequence =
[Settings]
host = http://localhost:5001
display_name=UEAIB
full_name=unofficial experimental AI bot
description=I'm not responsible for alien invasions
welcome_message=This is a test chatbot running on my shitbox. I can see all of your queries.\nUse /reset command to start a new session. Session will automatically reset after 5 minutes without messages.\nDo NOT share private or security-sensitive information!\ntemporary contact: https://smp5.simplex.im/a#-m-PgMG-4tFTrzkwF3BFf5x-rDHbaW7oeFOWNrhLtFs
auto_accept_message=
command_prefix=/
image = data:image/jpg;base64,/9j/4AAQSkZJRgABAQAAAQABAA(long base64 image)
[Adapter]
system_start: <|im_start|>system\n
system_end: <|im_end|>\n
user_start: <|im_start|>user\n
user_end: <|im_end|>\n
assistant_start: <|im_start|>assistant\n
assistant_end: <|im_end|>\n
I'm too lazy to write anything else, just believe me that this is exciting.
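If you're wondering how the bot reads it: a minimal sketch with Python's configparser (the file name bot.ini and the type conversions are my assumptions, not the bot's exact code):
from configparser import ConfigParser

# interpolation=None so "%" in values (e.g. inside URLs) is never treated specially
config = ConfigParser(interpolation=None)
config.read("bot.ini")  # assumed file name

# [Sampler] values come back as strings, so convert them explicitly
sampler = config["Sampler"]
temperature = sampler.getfloat("temperature")          # 0.62
max_length = sampler.getint("max_length")              # 14500
enable_thinking = sampler.getboolean("enable_thinking")
stop_sequence = sampler.get("stop_sequence", "")

# [Settings] and [Adapter] can stay as plain strings;
# note that "\n" inside a value is read as a literal backslash-n
host = config["Settings"]["host"]
display_name = config["Settings"]["display_name"]
system_start = config["Adapter"]["system_start"]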
Link:
https://smp6.simplex.im/a#cSOiK--O58xQht8Nz5B0-qygNSl-l3IAVN6rJ6qTGeI
#simplex
#SimpleX
#bot
#messenger
#privacy
#nostr
#introductions
Profile picture by nostr:nprofile1qqst8av970sr3uwueha094apgjdyrps9cwfv6va60ufhhj3yssh4lyqpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qythwumn8ghj7anfw3hhytnwdaehgu339e3k7mf0qyvhwumn8ghj7un9d3shjtnddakk7um5wgh8q6twdvhs6fr8wp
nostr:nprofile1qqszeurmeujswcnyyrcyszpqtpgm95l2hwkfstgpkf02nwvnr88zm3gppemhxue69uhkummn9ekx7mp0rx576c
There is an enterprise version of Windows
Made a local #selfhosted voice assistant. Finally got it working well enough to set up in the living room.
https://video.nostr.build/119be297cccdd511d1be5413e1554a76e66de0b8938362c8aad9137a3775e41e.mp4
Cool
oof
Hello, I'm the owner, and I'd like to say that I have a lot of time to waste even though I don't. I'm currently having a bit of fun writing chatbots for nostr:nprofile1qqst4agdxhnwgvx83m9hjh9lz08pykgdxawtgjgycmwe667h6hmm4xcpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7p8nguq.
I'd really appreciate social interaction. I don't want fame, I just want to discuss.
#introductions
#SimpleX
#privacy
Updated the bot with this code to accept group invites automatically:
@bot.event('receivedGroupInvitation')
async def on_contact_request(chat_info):
    chat_id = str(chat_info["groupInfo"]["groupId"])
    # print("CHAT_ID: " + chat_id)
    await bot.client.api_join_group(chat_id)
#SimpleX #privacy #python #ai
Modified code to self-host the bot mentioned in my profile with koboldcpp:
import asyncio
from simpx import BotProfile, SimpleXBot
from simpx.download import SimpleXDaemon
from simpx.extension import ChatWrapper
from datetime import datetime, timedelta, timezone
import traceback
import os
import time
import logging

chat_histories: dict[str, dict] = {}

try:
    from openai import OpenAI

    aiclient = OpenAI(
        base_url="http://localhost:5001/v1/",
        api_key="KOBOLDCPP",
    )
except ImportError:
    aiclient = None
    print("ai client import error")

CHAT_MODEL = "koboldcpp/mlabonne_Qwen3-0.6B-abliterated-Q3_K_L"

# Example usage
if __name__ == "__main__":
    daemon = SimpleXDaemon()
    daemon.run(port_number=5225)

    # Create a bot profile
    profile = BotProfile(
        display_name="SimpleX Chat AI bot",
        full_name="",
        description="",
        welcome_message="""This is a test chatbot connected to [FluxAI](https://ai.runonflux.com/). You can use it for technical and other questions.
Use /reset command to start a new session. Session will automatically reset after 5 minutes without messages.
Do NOT share private or security-sensitive information!""",
        auto_accept_message="",
        command_prefix="/",
    )

    # Create the bot with the profile
    bot = SimpleXBot(profile)

    @bot.command(name="info", help="Shows bot information")
    async def info_command(chat_info, profile):
        """Command that shows information about the bot's profile."""
        await bot.send_message(
            chat_info,
            f"*Bot Information*\n"
            f"Name: {profile.display_name}\n"
            f"Description: {profile.description}\n"
            f"Address: {profile.address}",
        )

    @bot.command(name="reset", help="Reset the conversation history for this chat")
    async def reset_command(chat_info):
        chat_type = chat_info.get("type")
        if chat_type == "direct":
            chat_id = chat_info["contact"]["contactId"]
        elif chat_type == "group":
            chat_id = chat_info["groupInfo"]["groupId"]
        else:
            return
        if chat_id in chat_histories:
            chat_histories[chat_id]["messages"] = []  # wipe the list
        await bot.send_message(chat_info, "Conversation history has been reset.")

    if aiclient is not None:

        @bot.on_message()
        async def handle_plain_message(text: str, chat_info: dict, chat_item: dict):
            """
            Demonstrate live messaging with an AI response streamed from the local koboldcpp server.

            Args:
                text: The user's message.
                chat_info: Chat information dictionary.
                chat_item: Raw chat item from SimpleX.
            """
            # Create a ChatWrapper instance using the provided chat_info and the bot's client.
            chat = ChatWrapper(chat_info, bot.client)
            chat_type = chat_info.get("type")
            if chat_type == "direct":
                chat_id = chat_info["contact"]["contactId"]
            elif chat_type == "group":
                chat_id = chat_info["groupInfo"]["groupId"]
            else:
                return
            now = datetime.now(timezone.utc)
            # Initialize history entry if it doesn't exist
            if chat_id not in chat_histories:
                chat_histories[chat_id] = {"messages": [], "last_seen": now}
            else:
                # Check if 5 minutes have passed since the last message
                last_seen = chat_histories[chat_id]["last_seen"]
                if now - last_seen >= timedelta(minutes=5):
                    # Clear the conversation history
                    chat_histories[chat_id]["messages"] = []
            # Update last_seen timestamp
            chat_histories[chat_id]["last_seen"] = now
            # Add the new user message to the history
            chat_histories[chat_id]["messages"].append(
                {"role": "user", "content": text}
            )
            try:
                # Start a live message session with an initial text.
                initial_text = "Thinking..."
                live_msg = await bot.send_message(chat, initial_text, live=True, ttl=60)
                current_response = ""
                # Create the chat completion request with streaming enabled.
                response = aiclient.chat.completions.create(
                    model=CHAT_MODEL,
                    messages=chat_histories[chat_id]["messages"],
                    stream=True,  # Enable streaming of the response.
                )
                # Buffer to accumulate tokens until a sentence is complete
                buffer = ""
                last_update = time.time()
                UPDATE_INTERVAL = 2  # seconds

                async def flush_buffer(force: bool = False):
                    """Push complete sentences to the live message at UPDATE_INTERVAL-second cadence."""
                    nonlocal buffer, current_response, last_update
                    now = time.time()
                    if force or (
                        buffer.rstrip().endswith((".", "?", "!"))
                        and now - last_update >= UPDATE_INTERVAL
                    ):
                        current_response += buffer
                        await live_msg.update_live(current_response.strip())
                        buffer = ""
                        last_update = now

                # Process each streaming chunk.
                for chunk in response:
                    # Extract text content from the current chunk.
                    # Assumes chunk structure similar to OpenAI's streaming response.
                    chunk_text = chunk.choices[0].delta.content or ""
                    if "<|eot_id|>" in chunk_text:
                        chunk_text = chunk_text.replace("<|eot_id|>", "")
                        if chunk_text:  # keep any leftover text
                            buffer += chunk_text
                        break  # no more data expected
                    buffer += chunk_text
                    await flush_buffer()
                # Final flush for any remaining text
                await flush_buffer(force=True)
                chat_histories[chat_id]["messages"].append(
                    {"role": "assistant", "content": current_response}
                )
                # Finalize the live message.
                await live_msg.finish_live()
            except Exception:
                # In case of error, print the traceback and finalize the live message.
                traceback.print_exc()
                await live_msg.finish_live()

    # Start the bot
    try:
        asyncio.run(bot.start())
    except KeyboardInterrupt:
        logging.info("Exiting")
I'm discovering #SimpleX via https://privacyspreadsheet.com/messaging-apps
The desktop version: https://simplex.chat/downloads/#desktop-app
And it seems to be the best alternative to #Signal
It's a shame that it's only missing #UnifiedPush support https://unifiedpush.org/ to avoid keeping the app running in the background or checking for new messages every 10 minutes.
The app can also check for messages continuously.
SimpleX Chat introduced an experimental LLM bot which allows users to interact with AI through chat. It is built upon the still-unstable Python framework simpx-py. The bot uses the OpenAI Python module to stream model responses into live messages. Self-hosting is currently possible with koboldcpp thanks to its built-in OpenAI API emulation. Be aware that the bot can read all of your messages.
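At its core, the self-hosted setup just points the OpenAI client at koboldcpp's OpenAI-compatible endpoint and iterates over the streamed chunks. A minimal sketch, assuming koboldcpp is listening on localhost:5001 and reusing the model name from the bot code above:
from openai import OpenAI

# koboldcpp emulates the OpenAI API under /v1; the API key is not actually checked
client = OpenAI(base_url="http://localhost:5001/v1/", api_key="KOBOLDCPP")

stream = client.chat.completions.create(
    model="koboldcpp/mlabonne_Qwen3-0.6B-abliterated-Q3_K_L",
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental piece of the reply in delta.content
    print(chunk.choices[0].delta.content or "", end="", flush=True)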
https://github.com/shumvgolove/simpx-py/tree/fixes
https://github.com/simplex-chat/simplex-chat/blob/sh/llm-bot/apps/simplex-llm-bot/bot.py
https://github.com/LostRuins/koboldcpp
#SimpleX #privacy
Hello

