11 Jan 2026 - tsp
7 mins
Summation amplifiers are a fundamental building block in analog electronics, allowing multiple voltage signals to be combined into a single output using operational amplifiers and a small number of resistors. They are widely used in applications such as audio mixing, sensor fusion, and analog computation, where linear superposition of signals is required. By choosing appropriate resistor values, summation amplifiers can implement both weighted and unweighted sums and averages with well-defined gain.
This article explains the two main summation amplifier configurations - inverting and non-inverting - and derives their behavior step by step using Kirchhoff's laws. It highlights the practical differences between the two approaches, discusses common pitfalls such as loading effects and bias currents, and provides guidance on when each topology is appropriate in real-world circuit design.
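As a preview of the derivation, the classic inverting summer - input resistors R_1 through R_n feeding the virtual-ground node, feedback resistor R_f - gives, under ideal op-amp assumptions:

```latex
V_{\text{out}} = -R_f \left( \frac{V_1}{R_1} + \frac{V_2}{R_2} + \cdots + \frac{V_n}{R_n} \right)
```

Choosing equal input resistors yields the (negated) unweighted sum; unequal resistors set the individual weights.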
11 Jan 2026 - tsp
11 mins
In the early days of the web, dynamic websites promised millisecond-accurate freshness and the thrill of a page that seemed alive. But the reality has been decades of wasted resources, fragile scalability, and broad attack surfaces. This article argues that the future lies in static-first design: serving prebuilt content, embracing caching, and letting updates propagate on their own timescales rather than rebuilding pages on every request. Static does not mean outdated though. Modern stacks - from Jamstack to serverless and edge functions - combine static delivery with just enough dynamism where it truly matters: forms, search, personalization, or realtime updates via pub/sub. The result is faster, safer, more resilient websites that scale naturally, without carrying the baggage of 1990s design mistakes.
11 Jan 2026 - tsp
16 mins
To illustrate a previous theory blog article, this article walks through real nuclear magnetic resonance measurements performed on an extremely simple, home-built spectrometer based on a permanent magnet and minimal RF electronics. Starting from the free induction decay and frequency tuning via beat signals, it shows how pulsed NMR experiments actually look in practice, how pulse lengths are calibrated using Rabi oscillations, and why raw decay constants extracted from single transients are often misleading. The focus is not on idealized theory or high-precision measurements, but on what one observes when working with a small, manually tuned setup and limited automation - and it illustrates, in an educational way, what signals and signal shapes actually look like.
Building on this, the article explores echo-based techniques such as the Hahn echo and Carr-Purcell pulse sequences to reveal the difference between reversible dephasing and irreversible loss of coherence. It shows how true transverse relaxation times (T2) can be extracted, why multi-pulse sequences yield shorter effective decay times, and how pulse imperfections shape experimental outcomes. The result is a practical, intuition-driven view of spin coherence that connects textbook concepts directly to measured signals and highlights both the power and the limits of simple NMR experiments.
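The distinction the article draws can be stated compactly (standard notation, not taken verbatim from the post): the free induction decay dies out with the effective time constant T2*, which lumps in reversible dephasing from field inhomogeneity, while a Hahn echo refocuses that contribution, so the echo amplitude at time 2τ decays only with the true T2:

```latex
\frac{1}{T_2^{*}} = \frac{1}{T_2} + \frac{1}{T_2^{\text{inhom}}},
\qquad
E(2\tau) \propto e^{-2\tau / T_2}
```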
10 Jan 2026 - tsp
11 mins
ChatGPT's support for remote MCP servers finally makes it possible to connect custom tools and private services directly to the web interface. However, while the MCP protocol itself supports simple authentication methods, the ChatGPT web interface currently requires OAuth for non-public connectors - creating a significant barrier for experimentation, prototypes, and single-user setups.
This short tutorial shows a pragmatic workaround: running a remote MCP server with a static shared secret passed via the URL, validated server-side using FastAPI middleware. While explicitly not suitable for production, this approach is ideal for learning MCP mechanics, testing tool design, and exploring integrations without standing up a full OAuth infrastructure.
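The core idea can be sketched in a few lines. The article validates the secret with FastAPI middleware; since FastAPI apps are ASGI apps, the same check can be written framework-agnostically as plain ASGI middleware. The secret value and the path layout below are hypothetical placeholders, not the article's actual setup:

```python
# Reject any request whose URL path does not carry the static shared
# secret, e.g. https://example.org/mcp/<secret>/... - a deliberately
# simple scheme for single-user experiments, not production use.
SHARED_SECRET = "change-me-please"  # hypothetical placeholder

class SharedSecretMiddleware:
    """Wraps any ASGI app (e.g. a FastAPI instance) and answers 401
    unless the request path contains the expected secret segment."""

    def __init__(self, app, secret: str = SHARED_SECRET):
        self.app = app
        self.secret = secret

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            path = scope.get("path", "")
            # Accept e.g. /mcp/<secret> or /mcp/<secret>/tools
            if f"/{self.secret}/" not in path and not path.endswith(f"/{self.secret}"):
                await send({"type": "http.response.start", "status": 401,
                            "headers": [(b"content-type", b"text/plain")]})
                await send({"type": "http.response.body", "body": b"unauthorized"})
                return
        await self.app(scope, receive, send)
```

Wiring it up is a single line before handing the app to Uvicorn, e.g. `app = SharedSecretMiddleware(fastapi_app, secret=...)`.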
08 Jan 2026 - tsp
23 mins
Large Language Models are powerful tools for generating ideas, code, and mathematical reasoning, but they are also prone to confidently producing subtle errors. This becomes especially problematic in mathematics and theoretical physics, where correctness is non-negotiable. This article explores a practical and increasingly successful solution: combining LLMs with formal proof assistants such as Coq and Lean. In this workflow, the language model proposes definitions, lemmas, and proof strategies, while the proof assistant rigorously checks every step. The result is a system that preserves the creativity and exploratory power of LLMs while grounding every accepted claim in machine-verified mathematical truth.
The article surveys recent research in LLM-assisted theorem proving, from early neural premise selection and GPT-f's contributions to Metamath, to modern agentic systems that interact step-by-step with proof environments. It then provides a hands-on tutorial showing how to guide Coq with an LLM, both manually and through automated agent loops such as Codex-based workflows. The broader outlook extends beyond pure mathematics, highlighting emerging efforts to formalize parts of theoretical physics. Together, these developments point toward a new research paradigm in which AI systems can explore bold ideas - while formal verification ensures that only rigorously proven results survive. This opens up a future where new theories can be designed and explored far more efficiently than before.
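To make the division of labor concrete, here is a toy Lean 4 example (illustrative only; the article's tutorial uses Coq): an LLM might propose both the statement and the proof term, and the proof assistant's kernel accepts the result only if it type-checks - a hallucinated proof is simply rejected.

```lean
-- A trivial lemma of the kind an LLM could suggest; Lean verifies
-- the supplied proof term against the stated type.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```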
05 Jan 2026 - tsp
9 mins
The Lagrangian is one of the most powerful tools in physics, yet for many of us it enters our education as something to be accepted rather than truly understood. We learn to write down L = T - V, apply the Euler-Lagrange equations, and move on - often without ever seeing why this particular combination of kinetic and potential energy is singled out in the first place. Only later, when encountering more advanced formalisms such as path integrals, does this question resurface.
This article takes a step back and reconstructs the Lagrangian from basic Newtonian ideas. Starting from momentum, conservative forces, and the principle of virtual work, it shows how the structure and the variational statement emerge naturally without being postulated. The result is a perspective in which the action principle and the Lagrangian appear not as a mysterious axiom, but as the most compact and economical way of expressing classical dynamics and the natural bridge to its quantum generalization.
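The object whose derivation the article reconstructs is the standard one: the Lagrangian together with the Euler-Lagrange equations its trajectories must satisfy,

```latex
L(q, \dot{q}, t) = T - V,
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot{q}_i}
  - \frac{\partial L}{\partial q_i} = 0
```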
04 Jan 2026 - tsp
13 mins
Modern backend stacks often run everything on one machine behind nginx or Apache - yet we still habitually wire internal services up with TCP ports on 127.0.0.1. This article argues that this is the wrong tool: if a service is strictly local and only meant to be reached through the reverse proxy, a TCP socket is unnecessary exposure and unnecessary complexity.
Unix domain sockets are a cleaner fit for local IPC: access control becomes simple filesystem permissions, accidental exposure to the network disappears by design, and you avoid burning shared TCP resources under load. With a practical Python/Uvicorn and Apache example, plus a short historical detour into why Java EE and Tomcat ended up stuck in the "everything is TCP" world, this short article is a pragmatic guide to making your internal service layout safer and more explicit.
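The basic wiring is short. Paths and names below are hypothetical placeholders; Uvicorn's `--uds` option binds to a Unix socket, and Apache's mod_proxy (2.4.7 and later) can forward to one:

```shell
# Bind the ASGI app to a Unix domain socket instead of a TCP port:
uvicorn app:app --uds /run/myapp/api.sock

# Access control is ordinary file permissions - only the proxy's
# group can connect:
chown myapp:www-data /run/myapp/api.sock && chmod 660 /run/myapp/api.sock

# Apache then forwards to the socket (httpd.conf / vhost):
#   ProxyPass        "/api/" "unix:/run/myapp/api.sock|http://localhost/api/"
#   ProxyPassReverse "/api/" "unix:/run/myapp/api.sock|http://localhost/api/"
```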
30 Oct 2025 - tsp
8 mins
Getting structured data out of large language models is one of the key steps when building automation pipelines. While OpenAI, Mistral, and Ollama make this easy through direct response_format options, Anthropic's Claude API takes a different route. There is no built-in schema parameter - yet structured, validated JSON output is still possible if one knows how to guide the model correctly.
This article explains how to achieve predictable structured responses from Claude by generating a synthetic tool definition that mirrors the desired schema. The approach is elegant, compatible with other models, and turns any Claude completion into a clean, validated data exchange. A practical example demonstrates how to implement it in Python using Pydantic and how this technique bridges the gap between OpenAI's response_format and Anthropic's tool-calling mechanism.
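The trick can be sketched without the API client. The article derives the schema from a Pydantic model; here a plain JSON schema stands in for brevity, and the schema contents (`person_record`, its fields) are hypothetical examples:

```python
# Wrap a JSON schema as an Anthropic tool definition and pin the tool
# choice, so the model must "call" the tool with arguments matching
# the schema - the tool_use arguments ARE the structured output.
PERSON_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

def make_synthetic_tool(name: str, schema: dict) -> dict:
    """Build a tool definition whose input_schema mirrors the
    desired response schema."""
    return {
        "name": name,
        "description": f"Return the extracted data as a {name} object.",
        "input_schema": schema,
    }

tool = make_synthetic_tool("person_record", PERSON_SCHEMA)

# Fragment of the messages.create(...) request (sketch, not executed):
request_fragment = {
    "tools": [tool],
    "tool_choice": {"type": "tool", "name": "person_record"},
}
```

On the response side, one reads the `tool_use` block's `input` field and validates it - with Pydantic in the article's full version.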
25 Oct 2025 - tsp
25 mins
mini-apigw is a lightweight, locally controlled OpenAI-compatible API gateway that unifies multiple AI backends such as OpenAI, Anthropic, Ollama, vLLM, and Fooocus under one clean endpoint. It restores sanity in a world where every AI service assumes exclusive control of your GPU by adding arbitration, per-app governance, and transparent configuration through simple JSON files - no dashboards, no Kubernetes, no bloat.
Designed for small labs, research environments, and hobby clusters, mini-apigw behaves like an old-school UNIX daemon: minimal, reliable, and easy to reason about. It lets you run modern LLMs and image models side by side with full control over access, security, and resources - all while keeping your infrastructure as lean and transparent as possible and using just a single API.
19 Oct 2025 - tsp
5 mins
When Ollama auto-updated on my small dual RTX 3060 headless setup, model loading suddenly stopped working - no errors, just silence and hanging clients. The GPUs were detected, yet nothing generated and loading of tensors failed from one day to the next. Upgrading drivers and even moving from version 0.12.4 to 0.12.5 did not help, while other CUDA applications ran perfectly.
After a few hours of debugging, the fix turned out to be simple: rolling back to Ollama 0.12.3 instantly restored normal behavior. If you are seeing lines like "llama_model_load: vocab only - skipping tensors" and "key with type not found ... general.alignment", this post walks through what happened and how to get your models running again without tearing your hair out.
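If you installed via the official Linux install script, the rollback can be pinned with the documented OLLAMA_VERSION variable (a sketch; verify against your own install method and package manager):

```shell
# Re-run the installer pinned to the last known-good release:
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.12.3 sh
```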