Let me expand this a little bit. An idea is something that pops into your head: sometimes out of the blue, sometimes after pondering a subject for a long time, sometimes after interacting with sources of information, like reading, talking to peers, etc.

So now you have an idea: "Peer-to-peer cash without central authority", "flying cars", "a smart phone with no buttons", "an electric car", etc.

To make any of those happen, let's call them high-level ideas, you will need to come up with a gigantic tree of lower-level ideas, starting with the initial level. Let's use electric cars as an example. Here's a small part of the first level of secondary problems for which you need to come up with ideas to implement your initial idea:

- solve battery technology

- get battery suppliers for mass production

- design an electric motor that withstands road vibration, etc.

- research lightweight materials

- find ways to optimize car frame to make use of batteries as load bearing parts

- find target addressable market

- get funded

- find engineering skills

- motivate workers

- … … … … … (a few dozen more)

And you also need meta-ideas like project planning, HR rules, what type of company structure would be best, etc.

Now for EACH of the sub-problems and meta-problems, and the ideas to solve them, you need to solve another few dozen sub-sub-problems, each of which requires its own ideas.

This goes down many, many layers, and you'll probably end up requiring a few million ideas. In the comparatively *extremely* easy case of a pure software product, each line of code represents an idea, because you need to have an idea of how to formulate the code such that it actually does what you want it to do.

Now look what we have:

YOUR GREAT IDEA ABOUT FLYING CARS - 1 idea

The execution of producing all the lower-level ideas, and stringing them all together so that a product comes to fruition - 10 million ideas

And that is why your great, fabulous idea is worthless: the execution is ~10 million times more work, and that is where the value lies.


Discussion

(Damus seems to have a problem editing long posts. It got stuck).

And yes, AI can certainly come up with a lot of the lower-level ideas, so there's hope for anyone with grand ideas who lacks execution skills.

Also: this is in large part why AI is so incredibly valuable.

Have a nice Sunday.

Yep... And I'm not sure how much AI really solves for any of this, especially for "atoms not bits" startups, because robots that can do even the rudimentary tasks of a 3-to-5-year-old human are still years (decades?) away.

Indeed. Here's the crucial point, from Grok, on the Jacobian matrix:

The matrix operation you're referring to is likely the **computation of the inverse kinematics Jacobian matrix** or the **forward kinematics transformation matrix**, both of which are critical in robotic joint control and can be computationally intensive. Let me clarify the key operation and its characteristics:

### Key Matrix Operation: Jacobian Matrix (and its Inverse)

The **Jacobian matrix** is central to robotic joint control, particularly for inverse kinematics, which maps desired end-effector velocities (or positions) to joint velocities (or angles). Here's a breakdown:

#### Characteristics of the Jacobian Matrix Operation:

1. **Purpose**:

- The Jacobian relates joint velocities (e.g., angular velocities of motors) to the end-effector's linear and angular velocities in Cartesian space.

- For inverse kinematics, the **inverse Jacobian** (or pseudo-inverse for redundant systems) is used to compute joint velocities or positions required to achieve a desired end-effector motion.

2. **Mathematical Form**:

- For a robot with \( n \) joints, the Jacobian is a \( 6 \times n \) matrix (6 rows for 3D linear and angular velocities, \( n \) columns for joint velocities).

- It is derived from the partial derivatives of the forward kinematics equations with respect to joint angles/positions.

- Example: For a manipulator, the forward kinematics gives the end-effector pose \( \mathbf{x} = f(\mathbf{q}) \), where \( \mathbf{q} \) is the joint configuration. The Jacobian is \( J = \frac{\partial f}{\partial \mathbf{q}} \).

3. **Computational Intensity**:

- **Forward Jacobian**: Computing the Jacobian involves calculating partial derivatives of the kinematic equations, which requires trigonometric functions (sines, cosines) and matrix multiplications for each joint. This scales with the number of joints and complexity of the kinematic chain.

- **Inverse Jacobian**: The inverse (or pseudo-inverse for non-square matrices) is significantly more intensive, requiring operations like singular value decomposition (SVD) or matrix factorization (e.g., LU or QR decomposition). For an \( n \)-joint robot, the computational complexity of inversion is approximately \( O(n^3) \) for a square matrix, or higher for pseudo-inverses.

- **Real-time requirement**: Robotic control often requires updating the Jacobian and its inverse at high frequencies (e.g., 100–1000 Hz), making it computationally demanding, especially for robots with many degrees of freedom (DoF).

4. **Challenges**:

- **Singularities**: Near kinematic singularities (e.g., when joints align in certain configurations), the Jacobian becomes ill-conditioned, making inversion unstable or impossible without special handling (e.g., damped least-squares methods).

- **Redundancy**: For robots with more joints than required (redundant manipulators), the pseudo-inverse is used, which adds computational overhead.

- **Numerical Stability**: Repeated matrix operations can introduce numerical errors, requiring robust algorithms.

5. **Applications in Robotics**:

- **Inverse Kinematics**: Solving for joint angles/velocities to achieve a desired end-effector pose or trajectory.

- **Force/Torque Control**: Relating joint torques to end-effector forces via the Jacobian transpose.

- **Motion Planning**: Used in control loops for smooth, precise movements.
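To make the definition \( J = \frac{\partial f}{\partial \mathbf{q}} \) above concrete, here is a minimal sketch for a hypothetical 2-link planar arm (link lengths and joint angles are made-up illustrative values). In the planar case the Jacobian is \( 2 \times 2 \) rather than the full \( 6 \times n \); the analytic result is cross-checked against a finite-difference approximation:

```python
import numpy as np

# Hypothetical 2-link planar arm, link lengths L1 and L2 (illustrative values).
# Forward kinematics of the end-effector position (x, y):
#   x = L1*cos(q1) + L2*cos(q1 + q2)
#   y = L1*sin(q1) + L2*sin(q1 + q2)
L1, L2 = 1.0, 0.7

def fk(q):
    """End-effector position for joint angles q = [q1, q2]."""
    q1, q2 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian J = d(fk)/dq, a 2x2 matrix for this planar arm."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12],
        [ L1 * c1 + L2 * c12,  L2 * c12],
    ])

# Sanity check: central finite differences should match the analytic Jacobian.
q = np.array([0.4, 0.9])
eps = 1e-6
J_num = np.column_stack([
    (fk(q + eps * e) - fk(q - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
assert np.allclose(jacobian(q), J_num, atol=1e-6)
```

For a full spatial manipulator the same idea applies, but `fk` returns a 6-dimensional pose (position plus orientation) and the Jacobian stacks a linear and an angular block into the \( 6 \times n \) matrix described above.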

#### Alternative Operation: Forward Kinematics Transformation Matrices

While the Jacobian is often the most compute-intensive due to its role in real-time control, **forward kinematics** also involves matrix operations that can be significant:

- **Transformation Matrices**: Each joint’s position and orientation are computed using \( 4 \times 4 \) homogeneous transformation matrices, which combine rotation and translation.

- **Characteristics**:

- Involves matrix multiplications for each joint to compute the end-effector pose.

- Requires trigonometric calculations (e.g., for rotation matrices based on joint angles).

- Less intensive than Jacobian inversion but still scales with the number of joints (approximately \( O(n) \) for \( n \) joints).

- Used in every control cycle to map joint states to Cartesian space.
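The transformation-matrix chain described above can be sketched as follows, assuming a simple planar chain where each joint contributes one rotation about z followed by a translation along its link (link lengths and angles are illustrative assumptions):

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous transform: rotation about the z-axis by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c, -s, 0.0, 0.0],
        [s,  c, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

def trans_x(d):
    """4x4 homogeneous transform: translation along the x-axis by d."""
    T = np.eye(4)
    T[0, 3] = d
    return T

def fk_pose(qs, lengths):
    """Compose one rotation + translation per joint: O(n) matrix products."""
    T = np.eye(4)
    for q, l in zip(qs, lengths):
        T = T @ rot_z(q) @ trans_x(l)
    return T

# End-effector pose of a hypothetical 3-joint planar chain.
T = fk_pose([0.3, -0.5, 0.2], [1.0, 0.8, 0.5])
position = T[:3, 3]       # translation part of the homogeneous transform
orientation = T[:3, :3]   # rotation part
```

The per-joint pattern (one rotation, one translation) is a simplification of a general convention such as Denavit-Hartenberg parameters, but it shows why the cost grows linearly with the number of joints: one chained \( 4 \times 4 \) product per joint, every control cycle.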

#### Why These Operations Are Compute-Intensive:

- **High DoF**: Robots with many joints (e.g., humanoid robots or multi-arm systems) increase the size of matrices and the number of operations.

- **Real-Time Constraints**: Control loops demand low-latency computations, often on embedded hardware with limited processing power.

- **Nonlinearities**: Kinematic equations involve nonlinear trigonometric functions, and Jacobian inversion requires iterative or complex linear algebra techniques.
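The damped least-squares handling of singularities mentioned above can be sketched in a few lines (the `damping` value is an illustrative assumption; real controllers tune or adapt it):

```python
import numpy as np

def dls_step(J, dx, damping=0.05):
    """One damped least-squares IK step:

        dq = J^T (J J^T + lambda^2 I)^(-1) dx

    The damping term keeps the solve well-conditioned near singularities,
    at the cost of slightly biased joint velocities."""
    m = J.shape[0]
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + (damping ** 2) * np.eye(m), dx)
```

With `damping=0` and a well-conditioned square Jacobian this reduces to the exact inverse; near a singularity the undamped solve blows up while the damped version stays bounded, which is exactly the trade-off the "ill-conditioned" point above describes.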

### Summary

The **Jacobian matrix** (and its inverse or pseudo-inverse) is the primary compute-intensive operation in robotic joint control, used for inverse kinematics and velocity mapping. It involves:

- A \( 6 \times n \) matrix for \( n \)-joint robots.

- Partial derivatives of kinematic equations.

- Inversion complexity of \( O(n^3) \), plus challenges with singularities and numerical stability.

- High-frequency updates in real-time control.

If you meant a different operation (e.g., dynamics-related, like the mass matrix in the manipulator’s equations of motion), please clarify, and I can dive deeper!