Mota
5265add611e48d3a64e0e4691fe8ca654fe1351c802ec3ebadf74c77be179f5e
A programmer

As someone who enjoys contributing to Nostr projects, I recommend that you look into these projects and learn the basics of this new protocol. It offers huge opportunities to shape the future of social media.

I am also working on a new project that customizes the protocol for a specific idea, and that has been really enjoyable for me.

Our job is like puzzle-solving, so don't fear AI—it frees our time to tackle better puzzles.

🔄 The Pros and Cons of Using a Synchronized Database

In many modern applications—especially those with mobile or offline capabilities—a synchronized database can be a game-changer.

✅ Benefits of a Sync DB:

• Offline Access: Users can interact with the app and make changes even without internet access. The data syncs once the connection is restored.

• Real-Time Consistency: Data remains consistent across devices and platforms, improving user experience and reliability.

• Improved Collaboration: Multiple users can work on the same data set simultaneously, and sync ensures everyone stays up-to-date.

• Fault Tolerance: Temporary network issues don’t block functionality, reducing app downtime and improving resilience.

⚠️ Disadvantages:

• Conflict Resolution: When two users change the same data offline, resolving conflicts can be complex and may require custom logic (see the sketch after this list).

• Increased Complexity: Implementing and managing a sync mechanism adds layers of complexity to the architecture.

• Data Overhead: Syncing large datasets or handling many frequent updates can lead to performance issues, especially on limited devices.

• Security Risks: Syncing data across multiple endpoints increases the surface area for potential security vulnerabilities.
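For the conflict-resolution point, here is a minimal sketch (my own illustration, not tied to any particular sync engine) of the simplest strategy, last-write-wins: every record carries a timestamp and the newer edit is kept.

```
# Last-write-wins conflict resolution: the record with the newer timestamp wins.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Record:
    value: str
    updated_at: datetime

def resolve(local: Record, remote: Record) -> Record:
    """Keep whichever version was written last."""
    return local if local.updated_at >= remote.updated_at else remote

offline_edit = Record("draft from the phone", datetime(2024, 5, 1, 10, 0, tzinfo=timezone.utc))
server_copy = Record("edit from the laptop", datetime(2024, 5, 1, 10, 5, tzinfo=timezone.utc))
print(resolve(offline_edit, server_copy).value)  # "edit from the laptop"
```

Real apps often need something smarter (field-level merges, CRDTs, or asking the user), which is exactly why this is listed as a cost.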

🗣 What do you think?

🧠 Real Arch users don’t yay.

I needed Elasticsearch 8.12, not that shiny 9.0 junk.

So I did what pros do:

🌀 Cloned the AUR

🕳 Traveled to an old commit

🛠 Ran: aur build

No fluff. No prompts. Just precision.

Now 8.12 hums quietly on my system like it never left.

💬 Still using yay? Cute.

🚀 Elasticsearch vs MongoDB — What’s the Difference? 🤔

Both are powerful NoSQL databases, but they serve very different purposes:

🔍 Elasticsearch

Built for search and analytics

Uses Lucene under the hood

Supports full-text search, autocomplete, filters, and ranking

Ideal for logs, monitoring tools, search bars, etc.

Near real-time performance, great for dashboards (e.g. Kibana)

Query language: Query DSL, a JSON-based domain-specific language (see the sketch below)
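A minimal sketch of that search flow with the official Python client (`elasticsearch` package); it assumes a local, unsecured dev node, and the index and field names are made up.

```
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a document, refresh (Elasticsearch is near real-time), then run a match query.
es.index(index="posts", document={"title": "Sync databases", "body": "offline first sync"})
es.indices.refresh(index="posts")

results = es.search(index="posts", query={"match": {"body": "sync"}})
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```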

📦 MongoDB

A general-purpose document database

Stores flexible BSON documents

Great for CRUD operations, apps, and services

Strong aggregation framework

Used in web apps, mobile apps, CMS, and more

Query language: Rich query syntax (find, match, aggregate, etc.); see the sketch below
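And the MongoDB side, a minimal sketch with the `pymongo` driver against a local instance; the database, collection, and fields are made up.

```
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["appdb"]

# Plain document CRUD...
db.users.insert_one({"name": "mota", "country": "NL", "logins": 5})
print(db.users.find_one({"name": "mota"}))

# ...and the aggregation framework for grouped stats.
pipeline = [{"$group": {"_id": "$country", "total_logins": {"$sum": "$logins"}}}]
for row in db.users.aggregate(pipeline):
    print(row)
```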

🧠 Summary:

Use Elasticsearch when you need blazing-fast search and analytics.

Use MongoDB when you need a flexible, scalable backend for storing and retrieving structured data.

I encountered a problem with an AUR package while using aurutils on Arch Linux. I prefer aurutils over other AUR helpers like yay. If you're interested, I can teach you how to use it.

However, I noticed that when the program opens the PKGBUILD for review, it uses Vim. Eventually I found out it actually uses Vifm as well, and there is a bug in the latest version of Vifm. They have fixed it, but the fix is still in beta. I managed to resolve the issue myself with the help of a comment I found:

https://github.com/vifm/vifm/issues/1055#issuecomment-2661021400

Python puzzle! What's the difference between [['$'] * 3 for i in range(3)] and [['$'] * 3] * 3?
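Spoiler below if you want to see it run: the comprehension builds three independent rows, while `* 3` repeats the very same inner list object three times.

```
a = [['$'] * 3 for i in range(3)]
b = [['$'] * 3] * 3

a[0][0] = 'X'
b[0][0] = 'X'

print(a)  # [['X', '$', '$'], ['$', '$', '$'], ['$', '$', '$']]
print(b)  # [['X', '$', '$'], ['X', '$', '$'], ['X', '$', '$']]  <- shared rows
```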

🚀 Linux Server Ready for Production!

Just finished setting up a full LAMP server (Linux + Apache + MySQL + PHP) for a client who’s deploying a custom PHP site (no framework, raw code).

🔧 Tasks accomplished:

Installed and secured MariaDB (`mysql_secure_installation`)

Created database, user, and set correct privileges

Verified local and remote MySQL login with secure credentials

Configured server for upcoming PHP + MySQL web application

✅ Ensured server hardware is healthy using tools like `smartctl`, `lm-sensors`, and `lshw`

Prepped system for local + remote automated backups with rsnapshot and rsync

Managed SSH, system users, and fail2ban for better security

💡 Next: Deploy the project code and import the live database!

🖥️ From cPanel to full root control. Let's go! 💪

⏱️ Boost Your Productivity with `timew`

Ever wondered where your time really goes while working on multiple projects? Meet `timew` — a lightweight, powerful, and fully terminal-based time-tracking tool designed for developers, freelancers, and productivity geeks.

✅ Why I use `timew`:

• Seamless CLI experience

• No setup required — just start tracking with `timew start project_name`

• Flexible tagging and annotation system

• Offline-first, with powerful reporting and integrations (like Taskwarrior)

📊 From tracking deep focus sessions to logging client work hours, `timew` helps me stay accountable and analyze my time like a pro — no mouse, no distractions.

💡 Pro Tip: Use `timew summary :week` to instantly see how your time was spent this week.

🔧 Whether you're coding, writing, or managing tasks, `timew` lets you track with intention and reflect with data. Try it out and take back control of your time.

🚀 Debian Server Setup: Real-World Experience

Spent the last few days setting up a Debian-based Linux server for a client — here's a quick recap of the process and key takeaways:

🔹 Chose Debian over Arch for stability and long-term maintainability

🔹 Installed with RAID1 and full disk encryption for reliability

🔹 Fixed networking issues with non-free firmware and manual driver setup

🔹 Hardened the server:

✅ Disabled root SSH login

✅ Installed & configured fail2ban

✅ Cleaned up SSH key permissions

🔹 Set up daily backups:

🔄 Local backup with `rsync` to an external disk (`/mnt/backup`)

☁️ Remote backup using `rsnapshot` with `ssh` + `rsync`

📂 Backups include `/etc`, `/var/www`, MySQL dumps, and project files

🔹 Troubleshot:

❗ Broken FAT16 partitions (recovered/converted to ext4)

⚠️ Rsnapshot misconfig that looked like it was syncing, but wasn’t 😅

🔹 Finished by cleaning up the server with custom Bash scripts, deduplicating shell history, and testing restore paths.

Next step: Deploying the PHP/Nginx project and testing recovery from both backup sources.

🔐 Security, 🔁 Backup, 💻 Performance — all covered. Proud of the result!

🚀 If you're familiar with `pydantic_core`, Python ↔ Rust bindings, or schema generation, and want to contribute—please join the discussion or help debug this with me.

Let’s figure it out together 💡

nostr:nevent1qvzqqqqqqypzq5n94htpreyd8fjwperfrl5v5e20uy63eqpwc046ma6vw7lp0867qy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgwwaehxw309ahx7uewd3hkctcpzpmhxue69uhkummnw3ezumt0d5hsqgywz4n9dzjta03ln2w32xges98gweanrnfv42a8fk7uzteah60llymfs0vk

🧮 collections.Counter in #Python makes counting easy!

It's a dict subclass for counting hashable items:

```
from collections import Counter

c = Counter('banana')
# Counter({'a': 3, 'n': 2, 'b': 1})
```

Great for frequency analysis, top elements, and more.
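For the "top elements" part, `most_common()` does it in one call:

```
from collections import Counter

c = Counter('banana')
print(c.most_common(2))  # [('a', 3), ('n', 2)]
```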

I researched the validate_assignment feature in the pydantic_core project and needed to understand how the Rust side is exposed to Python. When the Rust crate is built, it produces a folder called python/pydantic_core that includes files like core_schema.py and _pydantic_core.py, which form the core of Pydantic.

I noticed that when Python generates a schema, it represents every overridden function as an object like this:

```
{
    'type': 'function-after',
    'function': {'type': 'no-info', 'function': function},
    ...
}
```

This ends up as a plain Python dictionary that pydantic_core consumes. For _model_wrap_validator, the type is set to 'function-wrap' and the helper used is no_info_wrap_validator_function. This helped me understand that the Rust side flags every function accordingly. We are getting closer to working with a real Rust project.
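To make that concrete, here is a minimal sketch (my own example, not taken from pydantic) of how such a dict is normally built with the core_schema helpers and consumed by SchemaValidator; the function `shout` is made up.

```
from pydantic_core import SchemaValidator, core_schema

def shout(value: str) -> str:  # plain Python function, run after the core str validation
    return value.upper()

# no_info_after_validator_function builds the 'function-after' / 'no-info' shape shown above.
schema = core_schema.no_info_after_validator_function(shout, core_schema.str_schema())

validator = SchemaValidator(schema)
print(validator.validate_python("hello"))  # "HELLO"
```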

nostr:nevent1qvzqqqqqqypzq5n94htpreyd8fjwperfrl5v5e20uy63eqpwc046ma6vw7lp0867qyg8wumn8ghj7mn0wd68ytnddakj7qg4waehxw309aex2mrp0yhxgctdw4eju6t09uqsuamnwvaz7tmwdaejumr0dshsqgy0lm26rxkketdng5zsnncdrmt7n0ckps5vh0zj6wp0z83lst0wlc6mdq2d

Writing is one of the best ways to clarify your own thinking.

When you explain something in words, your mind is forced to organize, simplify, and truly understand it.

In my opinion, the best way to read and understand an open-source project is to start by solving an actual issue.

It gives your exploration direction and reveals how the system really works under the hood.

How does the pydantic_core developer connect the Rust code to Python?

I'm working on Pydantic now and trying to solve an issue

# pydantic/issues/11823

I have to read and trace most Pydantic code

In the model from the issue, validate_assignment=True is present, which is causing the error. Therefore, I need to find out where _model_wrap_validator is being executed.

I was initially unfamiliar with the BaseModel metaclass, so I looked into the model_construction.ModelMetaclass. I discovered a variable named __pydantic_validator__, which I noticed being used in the __init__ method of BaseModel as validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self).

In the metaclass, this variable is initialized by a function called create_schema_validator. As I investigated the project further, I discovered that this function returns SchemaValidator(schema, config) from pydantic_core, which is implemented in Rust. I don't want to go deeper into that at the moment, but I should note that the SchemaValidator is created before my class is ever processed by pydantic_core. I realized that the issue arises when the schema for pydantic_core is generated on the Python side.

Before I dive into pydantic_core, I want to learn more about model_validator. Through my research, I found it in GenerateSchema. As you know, that is where the core schema for pydantic_core is created before anything else runs. I discovered that it has a method called _model_schema, which is only called when our object is a BaseModel subclass.

Upon further investigation, I came across a method called apply_model_validators, which sets the order of our overridden model validators, but it only creates the schema without executing any validation.

For the model, everything works fine when I create an instance, like m = M(x="foo"); the validator sees {'x': 'foo'}. However, when I assign a new value with `m.x = "bar"`, the problem arises: the model is still represented as x='foo'.
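For reference, here is a minimal sketch of the kind of model involved (my own reconstruction, not the exact model from the issue): validate_assignment combined with a wrap model validator.

```
from pydantic import BaseModel, ConfigDict, model_validator

class M(BaseModel):
    model_config = ConfigDict(validate_assignment=True)
    x: str

    @model_validator(mode="wrap")
    @classmethod
    def _model_wrap_validator(cls, data, handler):
        validated = handler(data)
        print(validated)  # per the issue, on assignment this still shows the old value
        return validated

m = M(x="foo")   # prints x='foo'
m.x = "bar"      # validate_assignment re-runs the wrap validator
```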

Therefore, I examined the `__setattr__` method in BaseModel. After some digging, I found a lambda function called validate_assignment, which looks like this: lambda model, name, val: model.__pydantic_validator__.validate_assignment(model, name, val). This has led me to believe that I need to check validate_assignment either in pydantic_core or in the GenerateSchema module in Python.

Should I use tags?

Could I post about technical things here?

Have you ever worked with Pydantic?

Should we document all things that we learned?