diff --git a/docs/index.html b/docs/index.html index a9c1a1b0d2f..986ae04af2e 100644 --- a/docs/index.html +++ b/docs/index.html @@ -18,15 +18,16 @@

News

  • - Does current AI represent a dead end? (www.bcs.org) + Fake Nintendo lawyer is scaring YouTubers, and its not clear YouTube can stop it (www.theverge.com)
  • +
  • - Why OpenAI's Structure Must Evolve to Advance Our Mission (openai.com) + Does current AI represent a dead end? (www.bcs.org)
  • @@ -34,9 +35,8 @@

    News

    -
  • - Missiles Are Now the Biggest Killer of Airline Passengers (www.wsj.com) + The new science of controlling lucid dreams (www.scientificamerican.com)
  • @@ -45,16 +45,25 @@

    News

  • - Building AI Products–Part I: Back-End Architecture (philcalcado.com) + I send myself automated emails to practice Dutch (github.com)
  • +
  • + Why OpenAI's Structure Must Evolve to Advance Our Mission (openai.com) +
  • + + + + + +
  • - Implementing SM-2 in Rust (borretti.me) + Missiles Are Now the Biggest Killer of Airline Passengers (www.wsj.com)
  • @@ -63,7 +72,7 @@

    News

  • - Quiver: A Modern Commutative Diagram Editor (github.com) + Implementing SM-2 in Rust (borretti.me)
  • @@ -72,7 +81,7 @@

    News

  • - Bill requiring US agencies to share source code with each other becomes law (fedscoop.com) + Quiver: A Modern Commutative Diagram Editor (github.com)
  • @@ -81,7 +90,7 @@

    News

  • - Thermodynamic model identifies how gold reaches Earth's surface (phys.org) + Bill requiring US agencies to share source code with each other becomes law (fedscoop.com)
  • @@ -179,15 +188,6 @@

    News

    -
  • - The trap of "I am not an extrovert" (orkohunter.net) -
  • - - - - - -
  • LFFS: Simplicity vs Efficiency (bytes.zone)
  • @@ -296,6 +296,15 @@

    News

    +
  • + How to Build an Electrically Heated Table? (solar.lowtechmagazine.com) +
  • + + + + + +
  • Twenty twenty four annual report and twenty twenty five goals (ablwr.github.io)
  • diff --git a/docs/log.txt b/docs/log.txt index cd3e1de4b6c..8dcbca44a66 100644 --- a/docs/log.txt +++ b/docs/log.txt @@ -1,42 +1,40 @@ -2024/12/27 15:16:19 error parsing https://themargins.substack.com/feed.xml: http error: 403 Forbidden -2024/12/27 15:16:19 Fetched posts from https://themargins.substack.com/feed.xml, took 41.655125ms -2024/12/27 15:16:19 error parsing https://mikehudack.substack.com/feed: http error: 403 Forbidden -2024/12/27 15:16:19 Fetched posts from https://mikehudack.substack.com/feed, took 44.797096ms -2024/12/27 15:16:19 error parsing https://highgrowthengineering.substack.com/feed: http error: 403 Forbidden -2024/12/27 15:16:19 Fetched posts from https://highgrowthengineering.substack.com/feed, took 51.229215ms -2024/12/27 15:16:19 Fetched posts from https://macwright.com/rss.xml, took 104.342994ms -2024/12/27 15:16:19 Fetched posts from https://www.benkuhn.net/index.xml, took 123.436521ms -2024/12/27 15:16:19 Fetched posts from https://anewsletter.alisoneroman.com/feed, took 128.91179ms -2024/12/27 15:16:19 Fetched posts from https://scattered-thoughts.net/rss.xml, took 163.677639ms -2024/12/27 15:16:19 Fetched posts from https://www.slowernews.com/rss.xml, took 188.176682ms -2024/12/27 15:16:19 Fetched posts from https://jvns.ca/atom.xml, took 199.672029ms -2024/12/27 15:16:19 Fetched posts from https://twobithistory.org/feed.xml, took 225.057321ms -2024/12/27 15:16:19 Fetched posts from https://danluu.com/atom.xml, took 283.918002ms -2024/12/27 15:16:19 Fetched posts from https://joy.recurse.com/feed.atom, took 352.008052ms -2024/12/27 15:16:20 Fetched posts from https://www.wildlondon.org.uk/blog/all/rss.xml, took 394.856736ms -2024/12/27 15:16:20 Fetched posts from https://routley.io/reserialised/great-expectations/2022-08-24/index.xml, took 433.991487ms -2024/12/27 15:16:20 Fetched posts from https://blog.golang.org/feed.atom?format=xml, took 512.85992ms -2024/12/27 15:16:20 Content still empty after HTML reader: http://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission -2024/12/27 15:16:20 Fetched posts from https://blog.veitheller.de/feed.rss, took 977.631019ms -2024/12/27 15:16:21 Fetched posts from https://commoncog.com/blog/rss/, took 1.398053157s -2024/12/27 15:16:21 Content still empty after HTML reader: https://vrklovespaper.substack.com/p/software-for-stationery-lovers -2024/12/27 15:16:21 Fetched posts from http://tonsky.me/blog/atom.xml, took 1.553777208s -2024/12/27 15:16:21 Get "https://eyeondesign.aiga.org/why-did-so-many-mid-century-designers-make-childrens-books/": tls: failed to verify certificate: x509: certificate signed by unknown authority -2024/12/27 15:16:21 Content still empty after HTML reader: https://todaythings.substack.com/p/to-acquire-a-goshawk-is-a-major-decision -2024/12/27 15:16:22 Content still empty after HTML reader: https://www.cell.com/device/fulltext/S2666-9986(24)00583-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2666998624005830%3Fshowall%3Dtrue -2024/12/27 15:16:22 Content still empty after HTML reader: https://ghostty.org/ -2024/12/27 15:16:23 Fetched posts from https://gochugarugirl.com/feed/, took 3.638390233s -2024/12/27 15:16:23 Content still empty after HTML reader: http://tinylogger.com/max/wnTJ9xu3fw5UiXLp -2024/12/27 15:16:23 Fetched posts from https://hnrss.org/frontpage?points=50, took 4.133823719s -2024/12/27 15:16:25 Fetched posts from https://blaggregator.recurse.com/atom.xml?token=4c4c4e40044244aab4a36e681dfb8fb0, took 5.971529281s -2024/12/27 15:16:49 
error parsing https://rachelbythebay.com/w/atom.xml: Get "https://rachelbythebay.com/w/atom.xml": dial tcp 216.218.228.215:443: i/o timeout -2024/12/27 15:16:49 Fetched posts from https://rachelbythebay.com/w/atom.xml, took 30.004271564s -2024/12/27 15:16:49 error parsing https://solar.lowtechmagazine.com/feeds/all-en.atom.xml: Get "https://solar.lowtechmagazine.com/feeds/all-en.atom.xml": dial tcp 84.79.2.129:443: i/o timeout -2024/12/27 15:16:49 Fetched posts from https://solar.lowtechmagazine.com/feeds/all-en.atom.xml, took 30.004290772s -2024/12/27 15:16:49 Skipping writing post, no content: http://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission -2024/12/27 15:16:49 Skipping writing post, no content: https://www.cell.com/device/fulltext/S2666-9986(24)00583-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2666998624005830%3Fshowall%3Dtrue -2024/12/27 15:16:49 Skipping writing post, no content: https://ghostty.org/ -2024/12/27 15:16:49 Skipping writing post, no content: https://vrklovespaper.substack.com/p/software-for-stationery-lovers -2024/12/27 15:16:49 Skipping writing post, no content: https://todaythings.substack.com/p/to-acquire-a-goshawk-is-a-major-decision -2024/12/27 15:16:49 Skipping writing post, no content: http://tinylogger.com/max/wnTJ9xu3fw5UiXLp -2024/12/27 15:16:49 Templated 36 posts, took 6.653698ms +2024/12/27 16:19:35 error parsing https://themargins.substack.com/feed.xml: http error: 403 Forbidden +2024/12/27 16:19:35 Fetched posts from https://themargins.substack.com/feed.xml, took 92.106944ms +2024/12/27 16:19:35 error parsing https://highgrowthengineering.substack.com/feed: http error: 403 Forbidden +2024/12/27 16:19:35 Fetched posts from https://highgrowthengineering.substack.com/feed, took 92.391108ms +2024/12/27 16:19:35 error parsing https://mikehudack.substack.com/feed: http error: 403 Forbidden +2024/12/27 16:19:35 Fetched posts from https://mikehudack.substack.com/feed, took 105.443556ms +2024/12/27 16:19:35 Fetched posts from https://macwright.com/rss.xml, took 173.356168ms +2024/12/27 16:19:35 Fetched posts from https://www.benkuhn.net/index.xml, took 174.919025ms +2024/12/27 16:19:35 Fetched posts from https://twobithistory.org/feed.xml, took 182.234043ms +2024/12/27 16:19:35 Fetched posts from https://www.slowernews.com/rss.xml, took 224.118799ms +2024/12/27 16:19:35 Fetched posts from https://jvns.ca/atom.xml, took 258.542921ms +2024/12/27 16:19:35 Fetched posts from https://rachelbythebay.com/w/atom.xml, took 268.567774ms +2024/12/27 16:19:35 Fetched posts from https://www.wildlondon.org.uk/blog/all/rss.xml, took 292.292736ms +2024/12/27 16:19:35 Fetched posts from https://routley.io/reserialised/great-expectations/2022-08-24/index.xml, took 322.243712ms +2024/12/27 16:19:35 Fetched posts from https://anewsletter.alisoneroman.com/feed, took 342.673888ms +2024/12/27 16:19:35 Fetched posts from https://scattered-thoughts.net/rss.xml, took 352.513821ms +2024/12/27 16:19:35 Fetched posts from https://danluu.com/atom.xml, took 394.341505ms +2024/12/27 16:19:35 Fetched posts from https://joy.recurse.com/feed.atom, took 448.077652ms +2024/12/27 16:19:35 Fetched posts from https://blog.golang.org/feed.atom?format=xml, took 518.046692ms +2024/12/27 16:19:36 Fetched posts from https://blog.veitheller.de/feed.rss, took 918.897194ms +2024/12/27 16:19:36 Fetched posts from https://solar.lowtechmagazine.com/feeds/all-en.atom.xml, took 1.270626207s +2024/12/27 16:19:36 Fetched posts from http://tonsky.me/blog/atom.xml, 
took 1.540953436s +2024/12/27 16:19:36 Fetched posts from https://commoncog.com/blog/rss/, took 1.581587984s +2024/12/27 16:19:37 Content still empty after HTML reader: http://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission +2024/12/27 16:19:37 Content still empty after HTML reader: https://vrklovespaper.substack.com/p/software-for-stationery-lovers +2024/12/27 16:19:38 Get "https://eyeondesign.aiga.org/why-did-so-many-mid-century-designers-make-childrens-books/": tls: failed to verify certificate: x509: certificate signed by unknown authority +2024/12/27 16:19:38 Content still empty after HTML reader: https://todaythings.substack.com/p/to-acquire-a-goshawk-is-a-major-decision +2024/12/27 16:19:39 Fetched posts from https://gochugarugirl.com/feed/, took 3.694939226s +2024/12/27 16:19:39 Content still empty after HTML reader: https://www.cell.com/device/fulltext/S2666-9986(24)00583-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2666998624005830%3Fshowall%3Dtrue +2024/12/27 16:19:40 Content still empty after HTML reader: https://ghostty.org/ +2024/12/27 16:19:40 Content still empty after HTML reader: http://tinylogger.com/max/wnTJ9xu3fw5UiXLp +2024/12/27 16:19:41 Fetched posts from https://hnrss.org/frontpage?points=50, took 5.814511053s +2024/12/27 16:19:42 Fetched posts from https://blaggregator.recurse.com/atom.xml?token=4c4c4e40044244aab4a36e681dfb8fb0, took 7.601968653s +2024/12/27 16:19:42 Skipping writing post, no content: http://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission +2024/12/27 16:19:42 Skipping writing post, no content: https://www.cell.com/device/fulltext/S2666-9986(24)00583-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2666998624005830%3Fshowall%3Dtrue +2024/12/27 16:19:42 Skipping writing post, no content: https://ghostty.org/ +2024/12/27 16:19:42 Skipping writing post, no content: https://vrklovespaper.substack.com/p/software-for-stationery-lovers +2024/12/27 16:19:42 Skipping writing post, no content: https://todaythings.substack.com/p/to-acquire-a-goshawk-is-a-major-decision +2024/12/27 16:19:42 Skipping writing post, no content: http://tinylogger.com/max/wnTJ9xu3fw5UiXLp +2024/12/27 16:19:42 Templated 37 posts, took 5.647657ms diff --git a/docs/posts/bad-research-idea-false-statements-in-e-graphs.html b/docs/posts/bad-research-idea-false-statements-in-e-graphs.html index bdb610dc10a..95063e89ac2 100644 --- a/docs/posts/bad-research-idea-false-statements-in-e-graphs.html +++ b/docs/posts/bad-research-idea-false-statements-in-e-graphs.html @@ -23,7 +23,7 @@

    bad research idea: false statements in e-graphs

    OK after much squinting at the progression of rewrite rules... I think I have found an example of where the logic goes wrong.

    Can you spot the error?

    -Screenshot 2024-12-23 at 10 06 52 PM +Screenshot 2024-12-23 at 10 06 52 PM

    The issue here is that the empty int list TupleInt.EMPTY is unified with TupleInt(0, partial(lambda i, self, j: Int.if_(j == self.length(), i, self[j])), 101, TupleInt.empty) aka TupleInt(0, lambda j: Int.if_(j == 0, 101, TupleInt.EMPTY[j])))

Now let's say we naively index the empty list, like TupleInt.EMPTY[0]. We could say this is incorrect, or we can represent it by unifying it with Int.NEVER. But it can show up in the e-graph, because in if_ conditionals the false branch can end up doing indexing that is not allowed. So we want it to not mess things up.

And in this case then, it will evaluate to (lambda j: Int.if_(j == 0, 101, TupleInt.EMPTY[j])))(0) which is Int.if_(0 == 0, 101, TupleInt.EMPTY[0])) which is 101... So then what we get is that 101 is unified with Int.NEVER, which... isn't good! It's really bad! Because it means all numbers can basically be unified together, i.e. false is true, whatever.
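To see the collapse in plain Python (a rough paraphrase for illustration only, not the actual egglog-python code), the function that the empty list gets unified with happily returns 101 at index 0, even though indexing the empty list should be an error:

```python
# Rough plain-Python paraphrase of the problem; not the real egglog-python API.
EMPTY: tuple = ()  # stands in for TupleInt.EMPTY

def empty_as_function(j: int) -> int:
    # The function TupleInt.EMPTY gets unified with:
    # Int.if_(j == 0, 101, TupleInt.EMPTY[j])
    return 101 if j == 0 else EMPTY[j]

print(empty_as_function(0))  # 101 -- but TupleInt.EMPTY[0] should be Int.NEVER,
                             # so 101 and Int.NEVER end up in the same e-class
```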

    diff --git a/docs/posts/building-ai-products-part-i-back-end-architecture.html b/docs/posts/building-ai-products-part-i-back-end-architecture.html deleted file mode 100644 index 419d0be9e4a..00000000000 --- a/docs/posts/building-ai-products-part-i-back-end-architecture.html +++ /dev/null @@ -1,266 +0,0 @@ - - - - - - - James Routley | Feed - - - - Back - Original -

    Building AI Products–Part I: Back-End Architecture

    - -
    -
    -
    - - - - -
    -

    In 2023, we launched an AI-powered Chief of Staff for engineering leaders—an assistant that unified information across team tools and tracked critical project developments. Within a year, we attracted 10,000 users, outperforming even deep-pocketed incumbents such as Salesforce and Slack AI. Here is an early demo:

    - - - -

By May 2024, we realized something interesting: while our AI assistant was gaining traction, there was overwhelming demand for the technology we built to power it. Engineering leaders using the platform were reaching out non-stop to ask not about the tool itself, but about how we made our agents work so reliably at scale and be, you know, actually useful. This led us to pivot to Outropy, a developer platform that enables software engineers to build AI products.

    - -

    Building with Generative AI at breakneck pace while the industry was finding its footing taught us invaluable lessons—lessons that now form the core of the Outropy platform. While LinkedIn overflows with thought leaders declaring every new research paper a “game changer,” few explain what the game actually is. This series aims to change that.

    - -

    This three-part series will cover:

    - -
      -
    • How we built the AI agents powering the assistant
    • -
    • How we constructed and operate our inference pipelines
    • -
    • The AI-specific tools and techniques that made it all work
    • -
    - -

    This order is intentional. So much content out there fixates on choosing the best reranker or chasing the latest shiny technology, and few discuss how to build useful AI software. This is a report from the trenches, not the ivory tower.

    - -

    Structuring an AI Application

    - -

    Working with AI presents exciting opportunities and unique frustrations for a team like ours, with decades of experience building applications and infrastructure.

    - -

    AI’s stochastic (probabilistic) nature fundamentally differs from traditional deterministic software development—but that’s only part of the story. With years of experience handling distributed systems and their inherent uncertainties, we’re no strangers to unreliable components.

    - -

    The biggest open questions lie in structuring GenAI systems for long-term evolution and operation, moving beyond the quick-and-dirty prompt chaining that suffices for flashy demos.

    - -

    In my experience, there are two major types of components in a GenAI system:

    - -
      -
    • Inference Pipelines: A deterministic sequence of operations that transforms inputs through one or more AI models to produce a specific output. Think of RAG pipelines generating answers from documents—each step follows a fixed path despite the AI’s probabilistic nature.
    • -
    • Agents: Autonomous software entities that maintain state while orchestrating AI models and tools to accomplish complex tasks. These agents can reason about their progress and adjust their approach across multiple steps, making them suitable for longer-running operations.
    • -
    - -
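As a rough sketch of the two component types above (hypothetical names and a stubbed model call, not Outropy's actual code), a pipeline is a fixed function over models, while an agent is a long-lived, stateful object that decides what to do next:

```python
# Hypothetical sketch contrasting an inference pipeline with an agent; not the real code.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"  # stub standing in for a real LLM client

def answer_question(question: str, documents: list[str]) -> str:
    """Inference pipeline: a fixed retrieve -> prompt -> generate sequence."""
    relevant = [d for d in documents if any(w in d.lower() for w in question.lower().split())]
    return call_llm(f"Answer {question!r} using: {relevant}")

@dataclass
class ProjectAgent:
    """Agent: keeps state across interactions and adjusts its behaviour."""
    memory: list[str] = field(default_factory=list)

    def handle(self, event: str) -> str | None:
        self.memory.append(event)            # maintain context across calls
        if "deadline" in event.lower():      # local decision within defined boundaries
            return call_llm(f"Assess schedule risk given: {self.memory}")
        return None
```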

    Our journey began with a simple Slack bot. This focused approach let us explore GenAI’s possibilities and iterate quickly without getting bogged down in architectural decisions. During this period, we only used distinct inference pipelines and tied their results together manually.

    - -

    This approach served us well until we expanded our integrations and features. As the application grew, our inference pipelines became increasingly complex and brittle, struggling to reconcile data from different sources and formats while maintaining coherent semantics.

    - -

    This complexity drove us to adopt a multi-agentic system.

    - -

    What are agents, really?

    - -

    The industry has poured billions into AI agents, yet most discussions focus narrowly on RPA-style, no-code and low-code automation tools. Yes, frameworks like CrewAI, AutoGen, Microsoft Copilot Studio, and Salesforce’s Agentforce serve an important purpose—they give business users the same power that shell scripts give Linux admins. But just like you wouldn’t build a production system in Bash, these frameworks are just scratching the surface of what agents can be.

    - -

    The broader concept of agents has a rich history in academia and AI research, offering much more interesting possibilities for product development. Still, as a tiny startup on a tight deadline, rather than get lost in theoretical debates, we distilled practical traits that guided our implementation:

    - -
      -
    • Semi-autonomous: Functions independently with minimal supervision, making local decisions within defined boundaries.
    • -
    • Specialized: Masters specific tasks or domains rather than attempting general-purpose intelligence.
    • -
    • Reactive: Responds intelligently to requests and environmental changes, maintaining situational awareness.
    • -
    • Memory-driven: Maintains and leverages both immediate context and historical information to inform decisions.
    • -
    • Decision-making: Analyzes situations, evaluates options, and executes actions aligned with objectives.
    • -
    • Tool-using: Effectively employs various tools, systems, and APIs to accomplish tasks.
    • -
    • Goal-oriented: Adapts behavior and strategies to achieve defined objectives while maintaining focus.
    • -
    - -

    While these intelligent components are powerful, we quickly learned that not everything needs to be an agent. Could we have built our Slackbot and productivity tool connectors using agents? Sure, but the traditional design patterns worked perfectly well, and our limited resources were better spent elsewhere. The same logic applied to standard business operations—user management, billing, permissions, and other commodity functions worked better with conventional architectures.

    - -

    This meant that we had the following layered architecture inside our application:

    - -

    - -

    Agents are not Microservices

    - -

    I’ve spent the last decade deep in microservices—from pioneering work at ThoughtWorks to helping underdogs like SoundCloud, DigitalOcean, SeatGeek, and Meetup punch above their weight. So naturally, that’s where we started with our agent architecture.

    - -

    Initially, we implemented agents as a service layer with traditional request/response cycles:

    - -

    - -

One of the biggest appeals of this architecture was that, even though we expected our application to be a monolith for a long time, it created an easier path to extracting services as needed and to benefiting from horizontal scalability when the time came.

    - -

    Unfortunately, the more we went down the path, the more obvious it became that stateless microservices and AI agents just don’t play nice together. Microservices are all about splitting a particular feature into small units of work that need minimal context to perform the task at hand. The same traits that make agents powerful create a significant impedance mismatch with these expectations:

    - -
      -
    • Stateful Operation: Agents must maintain rich context across interactions, including conversation history and planning states. This fundamentally conflicts with microservices’ stateless nature and complicates scaling and failover.
    • -
    • Non-deterministic Behavior: Unlike traditional services, agents are basically state machines with unbounded states. They behave completely differently depending on context and various probabilistic responses. This breaks core assumptions about caching, testing, and debugging.
    • -
    • Data-Intensive with Poor Locality: Agents process massive amounts of data through language models and embeddings, with poor data locality. This contradicts microservices’ efficiency principle.
    • -
    • Unreliable External Dependencies: Heavy reliance on external APIs such as LLMs, embedding services, and tool endpoints creates complex dependency chains with unpredictable latency, reliability, and costs.
    • -
    • Implementation Complexity: The combination of prompt engineering, planning algorithms, and tool integrations creates debugging challenges that compound with distribution.
    • -
    - -

    Not only did this impedance mismatch cause a lot of pain while writing and maintaining the code, but agentic systems are so far away from the ubiquitous 12-factor model that attempting to leverage existing microservice tooling became an exercise in fitting square pegs into round holes.

    - -

    Agents are more like objects

    - -

    If microservices weren’t the right fit, another classic software engineering paradigm offered a more natural abstraction for agents: object-oriented programming.

    - -

    Agents naturally align with OOP principles: they maintain encapsulated state (their memory), expose methods (their tools and decision-making capabilities via inference pipelines), and communicate through message passing. This mirrors Alan Kay’s original vision:

    - -
    -

    OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.

    -
    - -

    - -

    We’ve been in the industry long enough to remember the nightmares of distributed objects and the fever dreams of CORBA and J2EE. Yet, objects offered us a pragmatic way to quickly iterate on our product and defer the scalability question until we actually need to solve that.

    - -

    We evolved our agents from stateless Services to Entities, giving them distinct identities and lifecycles. This meant each user or organization maintained their own persistent agent instances, managed through Repositories in our database.

    - -

    This drastically simplified our function signatures by eliminating the need to pass extensive context as arguments on every agent call. It also lets us leverage battle-tested tools like SQLAlchemy and Pydantic to build our agents, while enabling unit tests with stubs/mocks instead of complicated integration tests.
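A minimal sketch of that shift (hypothetical classes and an in-memory store standing in for the database; the post doesn't include its own code): each agent becomes an entity with an identity and lifecycle, loaded and saved through a repository instead of receiving its full context on every call.

```python
# Hypothetical sketch of agents as entities behind a repository; not the actual Outropy code.
from pydantic import BaseModel

class PrioritiesAgentState(BaseModel):
    agent_id: str
    org_id: str
    watched_items: list[str] = []

class PrioritiesAgentRepository:
    """Loads and saves per-organization agent instances (a real one would use SQLAlchemy)."""

    def __init__(self) -> None:
        self._rows: dict[str, PrioritiesAgentState] = {}  # stand-in for a database table

    def get(self, org_id: str) -> PrioritiesAgentState:
        return self._rows.setdefault(
            org_id, PrioritiesAgentState(agent_id=f"priorities-{org_id}", org_id=org_id)
        )

    def save(self, state: PrioritiesAgentState) -> None:
        self._rows[state.org_id] = state

# Call sites no longer thread the whole context through every function signature:
repo = PrioritiesAgentRepository()
agent = repo.get("acme")
agent.watched_items.append("ship the login revamp")
repo.save(agent)
```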

    - -

    Implementing Agentic Memory

    - -

Agents’ memories range from as simple as a single value to as complicated as keeping track of historical information since the beginning of time. In our assistant, we have both types and more.

    - -

Simple, narrowly focused agents such as the “Today’s Priorities” agent had to remember nothing more than a list of high-priority things they were monitoring and eventually acting on, such as sending a notification if they weren’t happy with the progress. Others, like our “Org Chart Keeper”, had to keep track of all interactions between everyone in the organization and use that to infer reporting lines and the teams people belonged to.

    - -

The agents with simpler persistence needs would usually just store their data in a dedicated table using SQLAlchemy’s ORM. This obviously wasn’t an option for the more complicated memory needs, so we had to apply a different model.

    - -

    After some experimentation, we adopted CQRS with Event Sourcing. In essence, every state change—whether creating a meeting or updating team members—was represented as a Command, a discrete event recorded chronologically—much like a database transaction log. The current state of any object could then be reconstructed by replaying all its associated events in sequence.

    - -

While this approach has clear benefits, replaying events solely to respond to a query is slow and cumbersome, especially when most queries focus on the current state rather than historical data. To address this, CQRS suggests maintaining a continuously updated, query-optimized representation of the data, similar to materialized views in a relational database. This ensured quick reads without sacrificing the advantages of event sourcing. We started off storing events and query models in Postgres, planning to move them to DynamoDB when we started having issues.
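A minimal sketch of that write-side/read-side split (hypothetical event types, in-memory structures instead of the Postgres tables the post describes):

```python
# Hypothetical sketch of event sourcing plus a query-optimized read model; not the real code.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str       # e.g. "MeetingScheduled", "MeetingCancelled"
    payload: dict

event_log: list[Event] = []           # append-only, like a transaction log
meetings_view: dict[str, dict] = {}   # continuously updated read model

def apply(event: Event) -> None:
    """Update the read model; replaying the whole log from scratch rebuilds it."""
    if event.kind == "MeetingScheduled":
        meetings_view[event.payload["id"]] = event.payload
    elif event.kind == "MeetingCancelled":
        meetings_view.pop(event.payload["id"], None)

def record(event: Event) -> None:
    event_log.append(event)   # write side: append the event
    apply(event)              # keep the read side fresh

record(Event("MeetingScheduled", {"id": "m1", "topic": "project lavender kickoff"}))
record(Event("MeetingCancelled", {"id": "m1"}))
assert "m1" not in meetings_view
```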

    - -

One big challenge in this model is that only an agent knows what matters to it. For example, if a user cancels a scheduled meeting, which agents should care about this event? The scheduling agent, for sure, but if the meeting was about a specific project, you might also want the project management agent to know about it, as it might impact the roadmap.

    - -

    Rather than building an all-knowing router to dispatch events to the right agents—risking the creation of a God object—we took inspiration from my experience at SoundCloud. There, we developed a semantic event bus enabling interested parties to publish and observe events for relevant entities:

    - -
    -

    Soon enough, we realized that there was a big problem with this model; as our microservices needed to react to user activity. The push-notifications system, for example, needed to know whenever a track had received a new comment so that it could inform the artist about it. […] over several iterations we developed a model called Semantic Events, where changes in the domain objects result in a message being dispatched to a broker and consumed by whichever microservice finds the message interesting.

    -
    - -

    - -

    Following this model, all state-change events were posted to an event bus that agents could subscribe to. Each agent filtered out irrelevant events independently, removing the need for external systems to know what they cared about. Since we were working within a single monolith at the time, we implemented a straightforward Observer pattern using SQLAlchemy’s native event system, with plans to eventually migrate to DynamoDB Streams.
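A rough sketch of that in-process version (plain Python observers with hypothetical agents; the real implementation hooked into SQLAlchemy's event system):

```python
# Rough sketch of the semantic event bus as an in-process Observer; hypothetical agents.
from typing import Any, Callable

EventT = dict[str, Any]   # e.g. {"kind": "MeetingCancelled", "payload": {...}}
subscribers: list[Callable[[EventT], None]] = []

def publish(event: EventT) -> None:
    for handler in subscribers:
        handler(event)                                  # every agent sees every event...

def scheduling_agent(event: EventT) -> None:
    if event["kind"] == "MeetingCancelled":             # ...and filters what it cares about
        print("scheduling agent: freeing up the slot")

def project_agent(event: EventT) -> None:
    if event["kind"] in {"MeetingCancelled", "MilestoneSlipped"}:
        print("project agent: re-checking the roadmap")

subscribers.extend([scheduling_agent, project_agent])
publish({"kind": "MeetingCancelled", "payload": {"id": "m1"}})
```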

    - -

    Inside our monolith, the architecture looked like this:

    - -

    - -

    Managing both the ORM approach for simpler objects and CQRS for more complex needs grew increasingly cumbersome. Small refactorings or shared logic across all agents became harder than necessary. Ultimately, we decided the simplicity of ORM wasn’t worth the complexity of handling two separate persistence models. We converted all agents to the CQRS style but retained ORM for non-agentic components.

    - -

    Handling Events in Natural Language

    - -

    CQRS and its supporting tools excel with well-defined data structures. At SoundCloud, events like UploadTrack or CreateTrackComment were straightforward and unambiguous. AI systems, however, present a very different challenge.

    - -

    Most AI systems deal with the uncertainty of natural language. This makes the process of consolidating the Commands into a “materialized view” hard. For example, what events correspond to someone posting a Slack message like “I am feeling sick and can’t come to the office tomorrow, can we reschedule the project meeting?”

    - -

    We started with the naive approach most agentic systems use: running every message through an inference pipeline to extract context, make decisions, and take actions via tool calling. This approach faced two problems: first, reliably doing all this work in a single pipeline is hard even with frontier models—more on this in part II. Second, we ran into the God object problem discussed earlier—our logic was spread across many agents, and no single pipeline could handle everything.

    - -

One option involved sending each piece of content—Slack messages, GitHub reviews, Google Doc comments, emails, calendar event descriptions…—to every agent for processing. While this was straightforward to implement via our event bus, each agent would need to run its inference pipeline for every piece of content. This would cause all sorts of performance and cost issues due to frequent calls to LLMs and other models, especially considering that the vast majority of content wouldn’t be relevant to a particular agent.

    - -

    We wrestled with this problem for a while, exploring some initially promising but ultimately unsuccessful attempts at Feature Extraction using simpler ML models instead of LLMs. That said, I believe this approach can work well in constrained domains—indeed, we use it in Outropy to route requests within the platform.

    - -

    Our solution built on Tong Chen’s Proposition-Based Retrieval research. We already used this approach to ingest structured content like CSV files, where instead of directly embedding it into a vector database, we first use an LLM to generate natural language factoids about the content. While these factoids add no new information, their natural language format makes vector similarity search much more effective than the original spreadsheet-like structure.

    - -

    Our solution was to use an LLM to generate propositions for every message, structured according to a format inspired by Abstract Meaning Representation, a technique from natural language processing.

    - -

    This way, if user Bob sends a message like “I am feeling sick and can’t come to the office tomorrow, can we reschedule the project meeting?” on the #project-lavender channel we would get structured propositions such as:

    - -

    - -
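The original post shows the resulting propositions as an image. Purely as a hypothetical illustration of the shape of such output (not Outropy's actual format or the real propositions), they might look something like:

```python
# Hypothetical illustration only; not Outropy's actual proposition format or output.
propositions = [
    {"speaker": "Bob", "predicate": "has_status", "object": "sick", "channel": "#project-lavender"},
    {"speaker": "Bob", "predicate": "absent_from", "object": "office", "when": "tomorrow"},
    {"speaker": "Bob", "predicate": "requests_reschedule", "object": "project meeting"},
]
```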

    Naturally, we had to carefully batch messages and discussions to minimize costs and latency. This necessity became a major driver behind developing Outropy’s automated pipeline optimization using Reinforcement Learning.

    - -

    Scaling to 10,000 Users

    - -

As mentioned a few times, throughout this whole process it was very important to us to minimize the amount of time and energy invested in technical topics unrelated to learning about our users and how to use AI to build products.

    - -

We kept our assistant as a single component, with a single code base and a single container image that we deployed using AWS Elastic Container Service. Our agents were simple Python classes using SQLAlchemy and Pydantic, and we relied on FastAPI and asyncio’s excellent features to handle the load. Keeping things simple allowed us to make massive progress on the product side, to the point that we went from 8 to 2,000 users in about two months.

    - -

    - -

    That’s when things started breaking down. Our personal daily briefings—our flagship feature—went from taking minutes to hours per user. We’d trained our assistant to learn each user’s login time and generate reports an hour before, ensuring fresh updates. But as we scaled, we had to abandon this personalization and batch process everything at midnight, hoping reports would be ready when users logged in.

    - -

    As an early startup, growth had to continue, so we needed a quick solution. We implemented organization-based sharding with a simple configuration file: smaller organizations shared a container pool, while those with thousands of users got dedicated resources. This isolation allowed us to keep scaling while maintaining performance across our user base.
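A rough sketch of what a configuration-driven sharding scheme like that can look like (hypothetical organization names and shard labels; the post doesn't show its config format):

```python
# Hypothetical sketch of organization-based sharding driven by a simple config file.
SHARDS = {
    "shared-pool-1": ["small-org-a", "small-org-b", "small-org-c"],  # smaller orgs share a pool
    "dedicated-bigcorp": ["bigcorp"],                                # large orgs get dedicated containers
}

def shard_for(org_id: str) -> str:
    for shard, orgs in SHARDS.items():
        if org_id in orgs:
            return shard
    return "shared-pool-1"   # default for organizations not listed yet

assert shard_for("bigcorp") == "dedicated-bigcorp"
assert shard_for("brand-new-org") == "shared-pool-1"
```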

    - -

    - -

    This simple change gave us breathing room by preventing larger accounts from blocking smaller ones. We also added priority processing, deprioritizing inactive users and those we learned were away from work.

    - -

    While sharding gave us parallelism, we quickly hit the fundamental scaling challenges of GenAI systems. Traditional microservices can scale horizontally because their external API calls are mostly for data operations. But in AI systems, these slow and unpredictable third-party API calls are your critical path. They make the core decisions, and this means everything is blocked until you get a response.

    - -

    Python’s async features proved invaluable here. We restructured our agent-model interactions using Chain of Responsibility, which let us properly separate CPU-bound and IO-bound work. Combined with some classic systems tuning—increasing container memory and ulimit for more open sockets—we saw our request backlog start to plummet.
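A rough sketch of that structure (hypothetical handlers, not the actual pipeline code): each handler either does its step or passes the request along, and slow third-party calls are awaited so they don't block the rest of the work.

```python
# Rough sketch of an async Chain of Responsibility; hypothetical handlers, not the real code.
import asyncio

class Handler:
    def __init__(self, nxt: "Handler | None" = None) -> None:
        self.nxt = nxt

    async def handle(self, request: dict) -> dict:
        return await self.nxt.handle(request) if self.nxt else request

class ParseHandler(Handler):
    async def handle(self, request: dict) -> dict:
        request["parsed"] = request["raw"].strip().lower()   # cheap, CPU-bound step
        return await super().handle(request)

class LLMHandler(Handler):
    async def handle(self, request: dict) -> dict:
        await asyncio.sleep(0.1)   # stands in for a slow, unpredictable third-party API call
        request["answer"] = f"summary of: {request['parsed']}"
        return await super().handle(request)

async def main() -> None:
    chain = ParseHandler(LLMHandler())
    print(await chain.handle({"raw": "  Project Lavender update  "}))

asyncio.run(main())
```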

    - -

    OpenAI rate limits became our next bottleneck. We responded with a token budgeting system that applied backpressure while hardening our LLM calls with exponential backoffs, caching, and fallbacks. Moving the heaviest processing to off-peak hours gave us extra breathing room.
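A minimal sketch of that hardening (hypothetical quota numbers and a stubbed client; the real system also layered in caching and fallbacks):

```python
# Hypothetical sketch of a token budget with exponential backoff around LLM calls.
import random
import time

class RateLimitError(Exception):
    pass

def call_llm(prompt: str) -> str:
    """Stub standing in for the real OpenAI/Azure client."""
    if random.random() < 0.3:
        raise RateLimitError("429 Too Many Requests")
    return f"[response to: {prompt[:30]}...]"

TOKENS_PER_MINUTE = 90_000   # assumed quota, not the real number
spent_this_minute = 0

def call_llm_with_budget(prompt: str, estimated_tokens: int, max_retries: int = 5) -> str:
    """Apply backpressure against a token budget, then retry with exponential backoff."""
    global spent_this_minute
    if spent_this_minute + estimated_tokens > TOKENS_PER_MINUTE:
        raise RuntimeError("over budget: queue this request for later")
    spent_this_minute += estimated_tokens
    for attempt in range(max_retries):
        try:
            return call_llm(prompt)
        except RateLimitError:
            time.sleep(2 ** attempt + random.random())   # exponential backoff with jitter
    raise RuntimeError("LLM call failed after retries")

print(call_llm_with_budget("Summarize today's stand-up", estimated_tokens=500))
```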

    - -

Our final architectural optimization was moving from OpenAI’s APIs to Azure’s GPT deployments. The key advantage was Azure’s per-deployment quotas, unlike OpenAI’s organization-wide limits. This let us scale by load-balancing across multiple deployments. To manage the shared quota, we extracted our GPT calling code into a dedicated service rather than adding distributed locks.

    - -

    - -

    The Zero-one-infinity rule

    - -

    One of my favorite adages in computer science is “There are only three numbers: zero, one, and infinity.” In software engineering, this manifests as having either zero modules, a monolith, or an arbitrary and always-growing number. As such, extracting the GPTProxy as our first remote service paved the way for similar changes.

    - -

    The most obvious opportunity to simplify our monolith and squeeze more performance from the system was extracting the logic that pulled data from our users’ connected productivity tools. The extraction was straightforward, except for one challenge: our event bus needed to work across services. We kept using SQLAlchemy’s event system, but replaced our simple observer loop with a proper pub/sub implementation using Postgres as a queue.
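A rough sketch of the Postgres-as-queue idea (hypothetical table and connection details; the post doesn't show its implementation): publishers insert events into a table, and each consumer claims unprocessed rows, for example with FOR UPDATE SKIP LOCKED so multiple consumers can poll in parallel.

```python
# Rough sketch of pub/sub over Postgres; hypothetical schema, not the actual implementation.
import json
import psycopg2

conn = psycopg2.connect("dbname=app")   # assumed connection string

def publish(kind: str, payload: dict) -> None:
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO events (kind, payload, processed) VALUES (%s, %s, false)",
            (kind, json.dumps(payload)),
        )

def consume_batch(limit: int = 100) -> list[tuple]:
    """Claim a batch of unprocessed events; SKIP LOCKED lets consumers run concurrently."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, kind, payload FROM events WHERE processed = false "
            "ORDER BY id FOR UPDATE SKIP LOCKED LIMIT %s",
            (limit,),
        )
        rows = cur.fetchall()
        if rows:
            cur.execute(
                "UPDATE events SET processed = true WHERE id = ANY(%s)",
                ([row[0] for row in rows],),
            )
        return rows
```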

    - -

    - -

    This change dramatically simplified things—we should have done it from the start. It isolated a whole class of errors to a single service, making debugging easier, and let developers run only the components they were working on.

    - -

    Encouraged by this success, we took the next logical step: extracting our agents and inference pipelines into their own component.

    - -

    - -

    This is where my familiar service extraction playbook stopped working. I’ll cover the details of our inference pipelines in the next article, but first, let’s talk about how we distributed our agents.

    - -

    Agents as Distributed Objects

    - -

    As successful as we were with modeling agents as objects, we’d always been wary of distributing them. My ex-colleague Martin Fowler’s First Law of Distributed Objects puts it best: don’t.

    - -

    Still, I think that Martin’s “exception” for microservices applies just as well for agents:

    - -
    -

    [My objection is that] although you can encapsulate many things behind object boundaries, you can’t encapsulate the remote/in-process distinction. An in-process function call is fast and always succeeds […] Remote calls, however, are orders of magnitude slower, and there’s always a chance that the call will fail due to a failure in the remote process or the connection.

    -
    - -

    The problem with the distributed objects craze of the 90s was its promise that fine-grained operations—like iterating through a list of user objects and setting is_enabled to false—could work transparently across processes or servers. Microservices and agents avoid this trap by exposing coarse-grained APIs specifically designed for remote calls and error scenarios.

    - -

    We kept modeling our agents as objects even as we distributed them, just using Data Transfer Objects for their APIs instead of domain model objects. This worked well since not everything needs to be an object. Inference pipelines, for instance, are a poor candidate for object orientation and benefit from different abstractions.

    - -

    At this stage, our system consisted of multiple instances of a few docker images on ECS. Each container exposed FastAPI HTTP endpoints, with some continuously polling our event bus.

    - -

This model broke down when we added backpressure and resilience patterns to our agents. We faced new challenges: what happens when the third of five LLM calls fails during an agent’s decision process? Should we retry everything? Save partial results and retry just the failed call? When do we give up and error out?

    - -

    Rather than build a custom orchestrator from scratch, we started exploring existing solutions to this problem.

    - -

    We first looked at ETL tools like Apache Airflow. While great for data engineering, Airflow’s focus on stateless, scheduled tasks wasn’t a good fit for our agents’ stateful, event-driven operations.

    - -

    Being in the AWS ecosystem, we looked at Lambda and other serverless options. But while serverless has evolved significantly, it’s still optimized for stateless, short-lived tasks—the opposite of what our agents need.

    - -

    I’d heard great things about Temporal from my previous teams at DigitalOcean. It’s built for long-running, stateful workflows, offering the durability and resilience we needed out of the box. The multi-language support was a bonus, as we didn’t want to be locked into Python for every component.

    - -

    After a quick experiment, we were sold. We migrated our agents to run all their computations through Temporal workflows.

    - -

    Temporal’s core abstractions mapped perfectly to our object-oriented agents. It splits work between side-effect-free workflows and flexible activities. We implemented our agents’ main logic as Workflows, while tool and API interactions—like AI model calls—became Activities. This structure let Temporal’s runtime handle retries, durability, and scalability automatically.
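A hedged sketch of how that split can look with the temporalio Python SDK (hypothetical workflow and activity names, not the actual Outropy code): side-effectful model calls live in activities, and the agent's deterministic decision logic lives in the workflow, so a failed call is retried on its own.

```python
# Hypothetical sketch of an agent's logic as a Temporal workflow; not the real Outropy code.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def call_llm(prompt: str) -> str:
    # A real activity would call the model API here; activities are where side effects live.
    return f"[model output for: {prompt[:40]}...]"

@workflow.defn
class BriefingAgentWorkflow:
    @workflow.run
    async def run(self, user_id: str) -> str:
        # Workflow code stays deterministic; Temporal retries and persists each activity,
        # so a failure in the second call does not force re-running the first one.
        outline = await workflow.execute_activity(
            call_llm,
            f"Outline a daily briefing for {user_id}",
            start_to_close_timeout=timedelta(minutes=2),
        )
        return await workflow.execute_activity(
            call_llm,
            f"Expand this outline into a briefing: {outline}",
            start_to_close_timeout=timedelta(minutes=2),
        )
```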

    - -

    The framework wasn’t perfect though. Temporal’s Python SDK felt like a second-class citizen—even using standard libraries like Pydantic was a challenge, as the framework favors data classes. We had to build quite a few converters and exception wrappers, but ultimately got everything working smoothly.

    - -

    Temporal Cloud was so affordable we never considered self-hosting. It just works—no complaints. For local development and builds, we use their Docker image, which is equally reliable. We were so impressed that Temporal became core to both our inference pipelines and Outropy’s evolution into a developer platform!

    - -

    Stay tuned for a deeper dive into Temporal and inference pipelines in the next installment of this series!

    - -
    - -
    - -
    -
    - - diff --git a/docs/posts/does-current-ai-represent-a-dead-end.html b/docs/posts/does-current-ai-represent-a-dead-end.html index fde1de6d350..7e4b6655a7d 100644 --- a/docs/posts/does-current-ai-represent-a-dead-end.html +++ b/docs/posts/does-current-ai-represent-a-dead-end.html @@ -44,12 +44,12 @@

    What can I do to resolve this?

    - Cloudflare Ray ID: 8f8a456a6a4beb2c + Cloudflare Ray ID: 8f8aa21829149e64 Your IP: - 20.43.247.165 + 20.172.29.19 Performance & security by Cloudflare diff --git a/docs/posts/fake-nintendo-lawyer-is-scaring-youtubers-and-its-not-clear-youtube-can-stop-it.html b/docs/posts/fake-nintendo-lawyer-is-scaring-youtubers-and-its-not-clear-youtube-can-stop-it.html new file mode 100644 index 00000000000..e3c70290deb --- /dev/null +++ b/docs/posts/fake-nintendo-lawyer-is-scaring-youtubers-and-its-not-clear-youtube-can-stop-it.html @@ -0,0 +1,22 @@ + + + + + + + James Routley | Feed + + + + Back + Original +

    Fake Nintendo lawyer is scaring YouTubers, and its not clear YouTube can stop it

    + +

    In late September, Dominik “Domtendo” Neumayer received a troubling email. He had just featured The Legend of Zelda: Echoes of Wisdom in a series of videos on his YouTube channel. Now, those videos were gone. 

    “Some of your videos have been removed,” YouTube explained matter-of-factly. The email said that Domtendo had now received a pair of copyright strikes. He was now just one copyright strike away from losing his 17-year-old channel and the over 1.5 million subscribers he’d built up. 

    At least, he would have been, if Domtendo hadn’t spotted something fishy about the takedown notice — something YouTube had missed. 

    Domtendo had been a little bit confused right from the start; the strikes didn’t make sense. Like countless other creators, Domtendo specializes in “Let’s Play” videos, a well-established genre where streamers play through the entirety of a game on camera.

    “The next copyright strike will close your channel”

    Nintendo has a complicated relationship with the fans who use its copyrighted works, infamously shutting down all sorts of unauthorized projects by sending cease-and-desists. It has gone after YouTubers, too. But both the Japanese gaming giant and the broader gaming industry typically leave Let’s Plays alone, because they serve as free marketing for their games.

    And yet, YouTube had received a legit-looking request apparently justifying these takedowns under the Digital Millennium Copyright Act (DMCA), signed “Tatsumi Masaaki, Nintendo Legal Department, Nintendo of America.”

    It was in a second email from YouTube that Domtendo spotted something off. The takedown requests came from a personal account at an encrypted email service: “tatsumi-masaaki@protonmail.com”.

    YouTube took action on Domtendo’s videos, even though the requests cited a personal email address.


    Image: Domtendo

    Fake takedowns are real. YouTube says over six percent of takedown requests through its public webform are likely fake, and the company accepts requests via plain email too, meaning anyone can file them. Fighting fake takedowns can cost creators time, money, and stress. But creators can’t easily be sure that a takedown is fake — and they can lose their entire channel if they get it wrong and clash with a company that has a legitimate copyright claim.

    When the well-respected Retro Game Corps received his second Nintendo copyright strike, he publicly declared he would self-censor all his future work to hopefully escape the company’s wrath. But first, he checked that Nintendo’s threat was real. He checked to see who YouTube listed as the complaining party, and that it came from a Nintendo email address. Then he checked with his YouTube Partner Manager to be extra safe.

    Rumors of fake Nintendo takedowns have swirled in the past. Earlier this year, Garry’s Mod developer Garry Newman removed 20 years’ worth of Nintendo-related fan content from his sandbox video game over takedown threats. Fans speculated that it may actually have been someone posing as a Nintendo lawyer. But Newman eventually revealed Nintendo was legitimately behind those takedowns despite using seemingly suspicious names and emails.

    Domtendo thought he might have an actual case of a Nintendo faker. So he decided to push back. At first, it seemed to work. He emailed YouTube, and it soon reinstated his videos. But Tatsumi was back the next day — this time, emailing Domtendo directly. 

    “Dear Domtendo, I represent Nintendo of America Inc. (“Nintendo”) in intellectual property matters,” the first email began. After a bunch of legalese, Tatsumi eventually explains why he’s reaching out: “I submitted a notice through YouTube’s legal copyright system, but the infringing content still appears.”

    He wouldn’t let it go. 

    Domtendo wasn’t about to risk his livelihood just in case Tatsumi was real. He got spooked, and began voluntarily pulling his videos off YouTube. But his new pen pal just kept asking for more removals. Tatsumi reached out day after day, sometimes multiple times a day, according to emails shared with The Verge. The threats got weirder, too:

    October 3rd:

    I ask for your expeditious removal of all infringing material that use Nintendo Switch game emulators by 6th October 2024. Please note that the amount of videos infringing Nintendo’s copyrights is too high to be able to list them all in this e-mail and we hope that you will conscientiously remove all infringing videos before the next week. 

    October 8th: 

    Nintendo hereby prohibits you from any future use of its intellectual and copyrighted property. Existing content may remain as long as there is no request to remove it. Nintendo of America Inc. would like to avoid further legal action and therefore hopes that their intellectual property will no longer be used by you. This cease-and-desist declaration is valid immediately and has been approved by President of Nintendo of America Doug Spencer Bowser.

    October 12th: 

    Nintendo of America Inc. (“Nintendo”) will no longer tolerate this behavior and is now on the verge of filing a lawsuit. Note that we work closely with our subsidiary Nintendo of Europe, located in Germany and therefore already have your address from the time you have been Nintendo Partner and/or will receive your new address from the residents’ registration office.”

    Domtendo began reaching out to friends and fellow content creators, and discovered he wasn’t alone. Waikuteru, a streamer who develops Zelda mods, had been targeted by Tatsumi as well. Only that time, the takedown notices were filed in Japanese, and YouTube claimed they’d come from a seemingly real email address: anti-piracy3@nintendo.co.jp. Whoever submitted those notices claimed to be a “Group Sub-Manager” in Nintendo’s “Intellectual Property Department.” 

    Could Tatsumi be legit? Was Domtendo staring down a real threat? 

    The Verge could find no public record of a Tatsumi Masaaki working for Nintendo of America or Nintendo’s legal team, period. Nintendo did not respond to The Verge’s repeated requests to fact-check whether such a lawyer even exists. 

    But there was a person by a similar name working on Nintendo technology patents in its home country of Japan, public records show, and Domtendo was dismayed to find a Nintendo email address for that person on the public web. 

    This is the name of a real person (真章 辰己) who worked for Nintendo out of Kyoto, Japan, but the company wouldn’t tell The Verge one word about it.


    Image: USPTO

    To a trained eye, there were signs that Domtendo’s “Tatsumi” was probably a fake. What business would a Japanese game technology inventor have individually chasing down a German YouTuber and threatening them with the laws of the United States? If they were a real lawyer, wouldn’t they know that threatening Domtendo with DMCA 512 is laughable, because that’s the portion of the law that protects platforms like YouTube rather than individual creators? 

    But Domtendo didn’t want to take the risk, not without proof. His livelihood was at stake. So as Tatsumi’s email threats rolled in, he reached out to Nintendo himself. 

    To his great surprise, Nintendo replied. 

    “Please note that tatsumi-masaaki@protonmail.com is not a legitimate Nintendo email address and the details contained within the communication do not align with Nintendo of America Inc.’s enforcement practices. We are investigating further,” the company’s legal department wrote on October 10th, according to a screenshot shared with The Verge.

    Here’s how Nintendo replied to Domtendo.


    Even then, Domtendo didn’t feel safe. He’d seen how Waikuteru had received a legal threat that seemingly came from a legitimate Nintendo email. Perhaps Tatsumi just wasn’t using his proper email account? Domtendo tried emailing “tasumi_masaaki@nintendo.co.jp” to find out. 

    His anxiety ratcheted even higher when Tatsumi’s next email arrived, asking him not to send email to that address. “Please understand that matters are not currently handled from there,” he wrote. Even though it seemed impossible that Tatsumi could be real, he somehow knew things that he shouldn’t.

    Then, on October 18th, Tatsumi suddenly changed his tune: “Dear Domtendo, I hereby retract all of my preceding claims.”

    The end?


    Tatsumi wasn’t done with Domtendo quite yet. Two more emails arrived the same day, explaining that while Nintendo had “suspended” him from filing copyright infringement claims, his Nintendo colleagues would now file them on his behalf. Hours later, Domtendo received what was in some ways the most legit-looking email yet, seemingly sent from anti-piracy3@nintendo.co.jp rather than a personal email address. 

    But that email turned out to be Tatsumi’s undoing, when Domtendo checked the headers and discovered they’d spoofed Nintendo’s email address using a publicly available tool on the web. I took the tool for a spin, and sure enough — unless you check, anyone can make an email look like it was sent from Nintendo that way. 

    Domtendo still doesn’t understand how “Tatsumi” knew he’d emailed the real Tatsumi at Nintendo. He changed his passwords and reformatted his computer, just to be safe. Today, his best guess is that the troll was lurking in his personal Discord channel. 

    He’s angry at YouTube for letting this happen. “It’s their fault,” he tells me. “Every idiot can strike every YouTuber and there is nearly no problem to do so. It’s insane,” he writes. “It has to change NOW.”

    “Every idiot can strike every YouTuber”

    It’s true there isn’t a terribly high bar to submit a YouTube copyright claim, something that YouTube itself admits. Currently, bad actors just need to fill in a form on a website, a place where YouTube sees a “10 times higher attempted abuse rate” than tools with more limited access. Or they can just email YouTube’s copyright department directly. And while the law technically requires a copyright holder to provide their name and address and state “under penalty of perjury” that they’re authorized to complain on Nintendo’s behalf, there’s nothing compelling YouTube to check they aren’t lying before slapping creators with penalties. 


    The thing to remember is the DMCA’s “Safe Harbor” isn’t here to protect creators, EFF legal director Corynne McSherry explained to me in 2022. When rightsholders realized they wouldn’t be able to sue every uploader, and internet platforms realized they wouldn’t be able to survive under an onslaught of uploader lawsuits, the law became a compromise to protect platforms from liability as long as they remove infringing content fast.

    “It creates a situation where service providers have very strong incentives to respond,” said McSherry. “They don’t want to mess around and try to figure out if they might be liable or not.”

Waikuteru and Rimea, a pair of other creators harassed by Tatsumi, agree that the YouTube system is unfair. Neither knows for sure whether trolls were responsible for all the takedown notices they’ve received, and that’s part of the problem. “The idea that months of worries were caused by a single troll as opposed to a big untouchable company is a hard pill to swallow either way,” says Rimea.

    But they also claim YouTube doesn’t allow smaller channels to challenge copyright strikes in the first place, arguing that it automatically and arbitrarily rejects the legal notices that would let them reinstate their videos. “YouTube decides whether someone loses his channel based on channel size,” says Waikuteru. 

    YouTube isn’t particularly interested in talking about any of this, though. 

    While YouTube spokesperson Jack Malon did confirm that “Tatsumi” made false claims, the company wouldn’t explain why the company even briefly accepted false claims from a protonmail.com email address as legitimate, and repeatedly dodged questions about whether Tatsumi made false claims on other creators’ videos, too.

    YouTube wouldn’t even tell me whether Domtendo was still in danger of false copyright claims from this specific individual, or offer assurances that it would take any new action to prevent this sort of behavior in the future. 

    Malon does claim that YouTube has “dedicated teams working to detect and prevent abuse,” however, and “work to ensure that any associated strikes are reversed” when bad actors make false claims. 

    As for the troll, Tatsumi declined The Verge’s interview request. “Dear Sean, I am an authorized agent for Nintendo of America Inc,” they replied, staying in character to the very end. 

    + + diff --git a/docs/posts/how-to-build-an-electrically-heated-table.html b/docs/posts/how-to-build-an-electrically-heated-table.html new file mode 100644 index 00000000000..edee60192c1 --- /dev/null +++ b/docs/posts/how-to-build-an-electrically-heated-table.html @@ -0,0 +1,466 @@ + + + + + + + James Routley | Feed + + + + Back + Original +

    How to Build an Electrically Heated Table?

    + +
    +
    +
    + +
    +
    +
    +Image: The electrically heated table that we build in this manual. Photo: Marina Kálcheva. Model: Anita Filippova. +
    +

    +
    +
    +
    +
    + + + +

    Why build an electrically heated table?

    +

    For centuries, many cultures have used heated tables for thermal comfort in cold weather. Examples are the “kotatsu” in Japan, the “korsi” in the Middle East, and the “brasero de picon” in Spain. A heat source goes under a table, a blanket goes over it, and people slide their legs underneath. The micro-climate under the blanket keeps you comfortable, even though the space in which you find yourself is cold.

    +

    The heated table is an excellent example of our ancestors’ energy-efficient way of warming: heating people, not spaces. Historically, glowing charcoal from the fireplace heated the space under the table. While that provided sufficient warmth, it also carried a significant risk of fire and carbon monoxide poisoning. Nowadays, we can use electric heating technology instead. For example, the Japanese kotatsu is still in use, but it’s now working with a small electric heater fixed under the table surface.

    +

    In this manual, I will walk you through the making of an electrically heated work desk for one person. I have built the table for myself in the co-working space in Barcelona where I have my office now. The building, an old industrial warehouse, has very high ceilings, no insulation, and little sun in winter. It can get very cold here and conventional heating systems don’t work. My heated table turns out to be a perfect solution. I can power it with a solar panel, a wind turbine, a bike generator, or a battery. I can also plug it into the power grid.

    +
    +
    +Image: The author sitting at the heated table in his workspace, where the experiments took place. Over the table is a large wool blanket with an equally large cotton blanket on top of it. Photo: Marina Kálcheva. +
    +

    +
    +
    +
    +
    +

    A heated table offers exceptional comfort. The lower part of your body gets immersed in heat as if you are baking in the sun or sitting in a hot bath. The warmth quickly spreads to the rest of your body through the bloodstream.

    +

    During a week of experiments in December 2024, with indoor air temperatures of 12-14°C (53-57°F), I recorded very low energy use for my freshly built heated table: between 50 and 75 watt-hours per hour. 1 Compare that to a conventional electric portable heater, which easily consumes 1,500 watt-hours per hour (and does not guarantee thermal comfort). My heated table uses as little electricity as charging a laptop or heating two liters of water for a hot water bottle (58 watt-hours per hour, assuming you reheat the water every two hours).
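To get a feel for what these figures mean over a full working day, here is a small back-of-the-envelope sketch in Python. The hourly figures come from the measurements above; the eight-hour workday and the electricity price are illustrative assumptions, not numbers from this article.

    # Rough daily energy comparison. Hourly figures are from the article;
    # the workday length and electricity price are assumptions.
    hourly_wh = {
        "heated table (steady state)": 60,                 # within the measured 50-75 Wh/h
        "portable electric heater": 1500,                  # typical resistive heater
        "hot water bottle (2 L, reheated every 2 h)": 58,
    }

    HOURS_PER_DAY = 8     # assumed working day
    EUR_PER_KWH = 0.25    # assumed electricity price

    for name, wh in hourly_wh.items():
        kwh = wh * HOURS_PER_DAY / 1000
        print(f"{name}: {kwh:.1f} kWh/day, ~{kwh * EUR_PER_KWH:.2f} EUR/day")

Even with these rough assumptions, the heated table ends up at roughly half a kilowatt-hour per working day, against twelve for the portable heater.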

    +

    It takes about 15 minutes before the heat effect of the table becomes noticeable, one hour before it heats at full power, and two hours before it reaches its maximum energy efficiency. Energy use drops from 75 watt-hours in the first hour to between 50 and 60 watt-hours in the third hour, as the thermostat turns off the heating more often. The system accumulates heat in the carpet, the blankets, the table, and the human body. If you turn it off after three hours, walk away, come back 30 minutes later, and turn it on again, the table is heating at maximum capacity within 10 minutes.

    +

    The manual

    +

    Building an electrically heated table is quite a simple project, which requires few technical skills and little time. Once you get all the parts together, you can assemble a heated table in a few hours. The costs are limited, too. 2 The construction process consists of three parts: wiring the electrical system, programming the thermostat, and getting the textile layers and insulation right.

    +

    What you need:

    +
      +
• Table
• Carbon heating film
• Thermostat
• Blankets
• Carpet
• Extra insulation material (I used cork)
Image: How to assemble an electrically heated and insulated table. 1. Fix carbon heat foil to thin wood board 2. Fix wood board to table 3. Add cork insulation between wood board and table 4. Add blanket. Illustration: Marie Verdeil.

    Step 1: Get a table

    +

    This manual concerns a table for only one person - my writing desk. Unlike the Japanese and Middle Eastern examples, my table is adapted to a Western-style sitting position: not on the floor but on a chair. You can turn any table into a personal heat source, but some are better suited than others. Most importantly, you should be able to screw a flat heating foil (step 2) under the table. However, structural elements may complicate that, as is the case for my table (see the image below).

Image: The table before the conversion. Photo: Kris De Decker.

    I solved this by installing the heating foil on a thin wooden board which I then screwed against the supporting elements. However, for some other tables, this may not work. Choose a wooden table. Wood insulates relatively well, so a wooden table top already provides some of the insulation you need to maximize heat production. It’s easy to screw things onto a wooden table as well.

    +

    You can build a larger heated table that can seat more people, but in that case, you will have to connect several heating foils (step 2) and stitch several blankets (step 8) together. Low-tech Magazine will build a large heated table for several people during a workshop in Barcelona on January 25, 2025.

    +

    Step 2: Choose your heating element

    +

    In principle, any electric heating device can power a heated table. However, because a heated and insulated table is so energy efficient, you need an electric heater with a very low power use. The average portable electric heater is way too powerful for our purpose. Due to its high surface temperature, it could also cause burns or provoke a fire when you put it under a table.

    +

    Carbon heating film

    +

    The best heating element for an electrically heated table - and the one I am using in this manual - is carbon or infrared heating film. These very thin heating foils are primarily meant for electric floor and wall heating in buildings and vehicles, for protecting batteries or water tanks against the cold, or for warming beehives and terrariums. Infrared heating films are low-temperature, large-surface heaters, so there’s no risk of burns or fire through direct contact with skin or clothes. They are meant to operate at a maximum temperature of 40-45°C (104-113°F).

Image: The infrared heating foil, screwed against a thin wood board, ready to be fixed below the table surface. Photo: Kris De Decker.

    Voltage

    +

    Carbon heating foils come in different voltages: 12V, 24V, and 110/220V. I chose a 12V heating foil to make my table compatible with my 12V solar installations and bike generator. If you have a 24V renewable power system, opt for a 24V heating foil.

    +

    If you want to plug the table into a wall socket, using mains power, there are two options. First, you could buy a 110V/220V heating foil, add a compatible plug, and connect it directly to the power socket. However, you really need to know what you are doing, because such high voltages carry the risk of electrocution. A safer DIY option is to use a 12V or 24V heat foil connected to a universal power adapter that steps down the voltage from 220V to 12 or 24V (comparable to the charger of a laptop). That is how I tested the table because I don’t have solar panels in my new office yet.

    +

    Size

    +

    Carbon heating film is available in different widths, for example, 20, 30, or 50 cm. You can buy it per running meter, and it’s flexible enough to be rolled up tightly for transport and storage. You can cut the foils to size in length at certain intervals. They have electric cables soldered to the positive and negative terminals, ready to plug into the power source. However, if you cut the foil into pieces, you will need to solder cables to all the new pieces. You can connect carbon heating foils in parallel, operated by only one thermostat (step 3). My table only has one heating foil, cut to size.

    +

    Power use

    +

    The power use of carbon heating foils - expressed in watts per square meter (W/m2) - varies because of two factors. First, heating foils are sold with different power outputs, mostly between 100 and 250 W/m2. Second, there’s size. If your 1m2 carbon foil reads 130 W/m2 and you cut it in half, then 0.5 square meters of heat foil will demand 65 watts. My table’s infrared heating film has a power use of 220W/m2. I first used 0.375 m2 of heating foil (50x70cm), resulting in a power use of about 82.5 watts. Then I cut off a centimeter to make it fit better under the table, and as a result power use dropped to roughly 75 watts.
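Since the power draw simply scales with the area of foil you keep, you can estimate it before cutting. A minimal sketch, using the figures from this paragraph (the function name is mine):

    # Power draw of a carbon heating foil: rating (W/m²) × area (m²).
    def foil_power(rating_w_per_m2, area_m2):
        return rating_w_per_m2 * area_m2

    print(foil_power(130, 1.0))     # 1 m² foil rated 130 W/m² -> 130.0 W
    print(foil_power(130, 0.5))     # the same foil cut in half -> 65.0 W
    print(foil_power(220, 0.375))   # 0.375 m² of the author's 220 W/m² foil -> 82.5 W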

    +

    Step 3: Choose your thermostat

    +

    You should only operate a carbon heating film with a thermostat. Without a thermostat, a heating foil could overheat and get damaged or cause fire. A thermostat also provides a stable comfort zone under the table. The thermostat turns the heater off when it reaches a set maximum temperature, and turns it back on when the temperature drops a few degrees lower.

    +

    The thermostat voltage must match the heating film voltage: if you have a 12V heating foil, you need a 12V thermostat. If you have a 24V heating film, get a 24V thermostat. The thermostat I use for my table is the W3230 DC 12V. It’s a widely sold device for all kinds of purposes. You need to wire the thermostat and set the temperature.

    +

    Step 4: Wire everything together

    +

    The thermostat is connected between the heating foil and the power source, as shown in the illustration below. The wiring may be different for other thermostat models.

    +
    +
    +Image: How to wire the thermostat to the heating foil and the power source. 1. Thermostat 2. Fuse 3. Temperature sensor 4. Carbon heat film. Illustration by Marie Verdeil. +
    +
    +Image: How to wire the thermostat to the heating foil and the power source. 1. Thermostat 2. Fuse 3. Temperature sensor 4. Carbon heat film. Illustration by Marie Verdeil. + + + +

    + View original image + + + View dithered image + +

    +
    +
    +
    +
    +

    Cable size

    +

    Your cables must be thick enough for the current that flows through them. Infrared heating foils are sold with thick electric cables included, and they are often much longer than you need them to be. You can cut them shorter and use the rest to wire the whole system. If you want to use other cables, then use a multimeter to measure the current that the heating film draws. For example, my 12V heat foil requires 6.6 Amps, so my cables - in the complete circuit - should have a conductor cross-section of at least 2.63 mm2 (that’s 13 AWG gauge, check this chart).

    +

    Fuse

    +

    Any electrical system needs fuses for safety. Once you have cut your heating film to size, measure how much current runs through it. Next, install a fuse that is slightly above that value. My table uses 6.6 amps and I added an 8A fuse. Read more about fuses in our solar power manual.
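As a sanity check on the two paragraphs above, you can estimate the current from the foil's power and voltage (I = P / V), compare it with what the multimeter reads, and then pick the next standard fuse size up. The list of fuse ratings below is an assumed assortment for illustration, not a recommendation from the article:

    # Estimate the current, then pick a fuse slightly above the measured value.
    def estimated_current(power_w, voltage_v):
        return power_w / voltage_v   # I = P / V

    FUSE_RATINGS = [3, 5, 8, 10, 15, 20]   # assumed assortment of fuse sizes, in amps

    def pick_fuse(measured_current_a):
        # smallest standard fuse rated above the measured current
        return min(r for r in FUSE_RATINGS if r > measured_current_a)

    print(round(estimated_current(75, 12), 2))   # ~6.25 A estimated from a 75 W foil at 12 V
    print(pick_fuse(6.6))                        # 6.6 A measured in the article -> 8 A fuse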

    +

    Switch

    +

You can add an on-off switch in your circuit to turn your heated table on and off. Use one that has a light indicator, so that you don’t forget to stop the heating system when you leave. You could also install a motion sensor under the table. That said, if you have wired your system correctly, used the right cable thickness, and added a fuse, leaving it on accidentally will not cause any safety problems.

    +

    Temperature sensor

    +

    The thermostat comes with a temperature sensor that you should install between the heating foil and the table (touching the heating foil). Make sure to fix this sensor securely, as it is essential to the proper and safe functioning of the heating system.

    +

    Where does the thermostat go?

    +

    Before you start wiring the system, decide where your thermostat goes, because it will determine the length of the cables you need. I have my thermostat installed under the table, hidden under the blankets. Once programmed, there’s no need to access it regularly (see further). Having the thermostat on top of the table means that you need a hole in your blanket for all the cables to go through. It also complicates the adding and removing of blankets.

    +
Image: The thermostat at the side of the table, wired to the heat foil and the power source. Photo: Marina Kálcheva.

    Step 5: Program the thermostat

    +

To program the thermostat, connect it to the power source. First, select the “heating” function. My thermostat’s default setting was “cooling” and I struggled to make it work at first. Here are the steps to follow:

    +
      +
• Press and hold “set” to enter the code setting menu.
• Press up or down to select “P0”.
• Press “set” again to enter the code setting.
• Press up or down to select “H”.
• Wait 3 seconds and the setting is saved.

    Next, you set the temperature at which you want the thermostat to turn the heat foil off. Heat foil manufacturers advise 40-45°C (104-113°F) during sustained use. Here’s what to do:

    +
      +
• Short press “set” and the blue number will start blinking.
• Press up or down to select the maximum temperature you want.
• Wait 3 seconds, and the setting is saved.

    You can leave all other functions of the thermostat untouched.

    +

    Insulating the table

    +

    To build a comfortable and energy-efficient heated table, you need to understand that a carbon heating film emits heat on both sides. If you simply attach it under a table it does not get warmer than 30°C, which will not improve thermal comfort in a significant way. It’s only when the heating surpasses skin temperature (around 35°C/95°F) and core body temperature (37°C/99°F) that you start feeling the warmth radiating towards you.

    +
    +


    +
    +

    To reach sufficiently high temperatures, you need to cover one side of the heating foil with heat-insulating material. That will force most of the heat output to the other side - the side that radiates energy towards your body. The carbon heat film for a heated table is placed below the tabletop, radiating heat downwards, and so the insulation goes on top. 3 It consists of the wood table top, one or more blankets, and any additional insulation material that you add to the table surface.

    +

    When carbon heating films are used as floor heating, the insulation is below. The upper side needs to be protected by a material that is strong and easily radiates heat, such as ceramic tile. However, when you build an electrically heated table with the foil radiating heat downwards, there’s no need to protect the exposed side. It can be touched safely and nobody is walking on it.

    +

    Step 6: Insulate the heat foil

    +

To improve the heat output of the infrared heating film, I have added 3 cm of cork insulation in the space between the foil and the table surface. I have not done any tests without the cork layer, but I am confident that the energy efficiency and thermal comfort of my table would not have been so excellent had it not been there. Other suitable natural insulation materials are wool, cellulose, wood fibers, hemp, and flax. You could also try radiator heat reflector foil.

    +

    Step 7: Get a carpet

    +

    If you put an electrically heated table on a cold floor, you will not be comfortable. The cold floor will suck up all the heat from the carbon heat foil, and your feet will conduct warmth to the floor. You need to insulate the floor, and you can do that with a wool carpet (or several wool carpets on top of each other). New wool carpets are expensive, but they don’t need to be much larger than the table and they can be found cheap second-hand. I got the very large Persian-style wool rug in my office for 50 euros, and I will add a second one on top to further improve the floor insulation.

    +

    Step 8: Find your blankets

    +

    The energy efficiency and thermal comfort of an electrically heated table are in large part determined by the type and size of the blankets you put over it. The blankets form part of the heat film insulation layer, but if they are long enough to reach the ground they also trap warm air under the table.

    +

    Radiant heating systems transfer energy to surfaces - including your body - and do not warm up the air directly. However, the air temperature under the table will slowly increase indirectly due to the higher surface temperatures of the blankets, the table, the carpet, and the person who sits at the table. During the experiments, the air temperature below my table at 25 cm above the floor increased by about 10°C (18°F).

    +
Image: The electrically heated table featuring a 240x240cm Abbruzzo wool blanket. Photo: Marina Kálcheva.

    Wool

    +

Choose a wool blanket. Wool traps heat very efficiently, is much more flame-resistant than other textile materials, doesn’t get dirty or smelly easily, regulates humidity, and purifies the air. 4 New wool blankets can be pricey at hundreds of euros for the size we need. However, I purchased four second-hand wool blankets for 90 euros, two of them large enough to reach the ground. If you find a wool blanket that is ugly or stained, simply layer it with a nicer and cheaper cotton blanket.

    +

    Blanket size

    +

    The size of the blanket you need depends on the size of your table (length, width, height). My table measures 80 cm long, 56 cm wide, and 75 cm high. That means that my blanket should measure at least 230 cm (75+80+75cm) x 206 cm (75+56+75cm) to reach the ground. Can’t find a blanket that is large enough? You could build a lower heated table and sit on the ground, or you could sew several smaller blankets together. Smaller blankets can also be useful in combination with larger blankets, or they can serve as the only blanket when it’s not so cold (see further).
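The same arithmetic works for any table: each blanket dimension is the table height, plus the corresponding table-top dimension, plus the table height again. A small sketch (the function name is mine):

    # Minimum blanket size, in cm, for a cover that reaches the ground on all sides.
    def min_blanket_size(length_cm, width_cm, height_cm):
        return (height_cm + length_cm + height_cm,   # along the length
                height_cm + width_cm + height_cm)    # along the width

    print(min_blanket_size(80, 56, 75))   # the author's desk -> (230, 206)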

    +

    Work surface

    +

Although it feels nice to work on a wool or cotton surface in winter, you can also put a wooden board on top of the blanket, cut to size, in order to protect the blanket from wear and dirt. Or you can drape a cotton tablecloth over the blanket, which is easier to wash than wool.

    +
Image: The electrically heated table, ready to get dressed with various blankets. Photo: Marina Kálcheva.

    Dressing and undressing the table

    +

Any heating system must be adjustable to achieve the desired comfort. For a central heating system, that happens by manipulating the thermostat. However, that doesn’t work so well for a heated table, because the temperature range of the carbon heating film is limited. Going below 38°C (100°F) will not provide a pleasant sensation of warmth, while prolonged heating above 45°C (113°F) may damage the heating film and make it too hot to touch. Instead, you can adjust thermal comfort in a wide range of air temperatures by “dressing” and “undressing” the table: by adding and removing textile layers, and by using them in different ways.

    +
    +


    +
    +

    It’s not an exaggeration to say that the blanket becomes part of your clothing. Sitting at the table feels like wearing a very large dress that is heated from the inside. Just like we dress differently depending on the weather, adding and removing layers, the number of blankets, and how they are positioned can reflect the environmental conditions. I have tested the table with one to four blankets, and with every extra blanket energy efficiency and thermal comfort increase. On a very cold day, you could add two or three blankets hanging down the table. If you are still cold, you could drape extra blankets over your chair and shoulders. You could end up in a complete tent, heated from the inside, with just your head and arms sticking out.

    +
Image: Anita Filippova works as an intern at the heated table in Low-tech Magazine’s office. There is an extra blanket wrapped around her shoulders and the chair. Photo: Marina Kálcheva.

Image: Anita Filippova works as an intern at the heated table in Low-tech Magazine’s office. It’s not an exaggeration to say that the blanket becomes part of your clothing. Photo: Marina Kálcheva.

However, as you add more blankets, the textile layer becomes increasingly heavy, and it takes more time to get in and out - to “dress” and “undress”. Consequently, a lighter cover is preferable if it provides sufficient comfort. On a chilly spring evening, one or two shorter blankets may be sufficient to keep you comfortable. I have tested the table like that and it did almost as well as with a large blanket. Energy use was somewhat higher and thermal comfort somewhat lower - I especially noted cold feet. Still, when it’s a bit warmer, I would prefer this setup because it’s more practical to get up from the desk.

    +

    Overheating can be solved without reaching for the thermostat as well: lift a corner of the carpet with your foot and let some heat escape, or remove one of the blankets. It would be more energy-efficient to turn down the thermostat, but the energy use of a heated table is already so low that there is room for some convenience. The comfort of a heated table can be further improved by a heated chair or bench, or by putting a screen behind your chair, covered with heat foil on one side and with insulation on the other side. That’s something for a future manual.

    +

    Safety

    +

While an electrically heated table is much safer than a table heated by fire, it does entail some risks, including fire. However, a fire can only happen if you ignore some crucial rules, which I repeat below:

    +
      +
• Do not put a powerful electric space heater under a table with a blanket.
• Never operate a radiant heating foil without a thermostat.
• Make sure your electric cables are thick enough.
• Add a fuse to your electrical system.
• Fix the temperature sensor securely: it should always stay in place.
• Avoid blankets and carpets made of synthetic fibers, which are highly flammable. Your heated table won’t catch fire if you follow the advice above, but a fire can start elsewhere in the space. The best material for blankets and carpets is wool, which does not catch fire. Cotton is flammable, but it’s not as bad as synthetic material.
• Long blankets are more energy efficient than shorter blankets. However, watch out with blankets that are too long, because they can make you trip when you get in or out of the table.

    Colophon

    +

    Heated table: Kris De Decker.

    +

    Illustrations: Marie Verdeil.

    +

    Photos: Marina Kálcheva.

    +

    Model: Anita Filippova.

    +

    Photoshoot location: AkashaHub Barcelona. Thanks to Carmen Tanaka.

    +

    Marie Verdeil and Roel Roscam Abbing gave feedback on the draft of this article.

    + + +

    Relevant books:

Image: Powering the heated table with the bike generator. Photo: Marina Kálcheva.
diff --git a/docs/posts/i-send-myself-automated-emails-to-practice-dutch.html b/docs/posts/i-send-myself-automated-emails-to-practice-dutch.html
new file mode 100644
index 00000000000..663e3c09e67
--- /dev/null
+++ b/docs/posts/i-send-myself-automated-emails-to-practice-dutch.html
@@ -0,0 +1,86 @@

    I send myself automated emails to practice Dutch

    + +
    + +

    This project automates the daily delivery of an email containing three C1-level Dutch words, their English translations, and example sentences. The email looks like this:

    +

    Screenshot of email

    + +

    I created this project because I couldn't find a suitable app to help me build a C1-level Dutch vocabulary. I discovered that ChatGPT provides good word suggestions and decided to automate the process. Additionally, I know that I check emails more consistently than apps, making this method more effective for learning.

    +

    This project also provided an opportunity to refresh my skills in Terraform and Python.

    + +

    A CloudWatch Event Rule triggers a Lambda each morning at 7:00. The Lambda retrieves all previously sent Dutch words from DynamoDB. It then retrieves three new words from ChatGPT, stores them in DynamoDB, and sends them to SES. SES delivers them to the end user's email.
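The repository's lambda_function.py is not reproduced here, but the flow described above might look roughly like the following Python sketch. The table name, email addresses, item schema, and the get_new_words helper (which would wrap the ChatGPT call) are placeholders of mine, not the project's actual identifiers.

    import boto3

    dynamodb = boto3.resource("dynamodb")
    ses = boto3.client("ses")

    TABLE_NAME = "dutch-words"      # placeholder table name
    SENDER = "me@example.com"       # must be verified in SES
    RECIPIENT = "me@example.com"

    def get_new_words(known_words, count=3):
        """Placeholder for the ChatGPT call: return `count` new C1-level Dutch
        words (with translations and example sentences) not in `known_words`."""
        raise NotImplementedError

    def handler(event, context):
        table = dynamodb.Table(TABLE_NAME)

        # 1. Retrieve all previously sent words from DynamoDB.
        known = {item["word"] for item in table.scan()["Items"]}

        # 2. Ask ChatGPT for three new words that have not been sent yet.
        new_words = get_new_words(known, count=3)

        # 3. Store the new words so they are not repeated in future emails.
        for w in new_words:
            table.put_item(Item=w)

        # 4. Send the email via SES.
        body = "\n\n".join(f"{w['word']} - {w['translation']}\n{w['example']}"
                           for w in new_words)
        ses.send_email(
            Source=SENDER,
            Destination={"ToAddresses": [RECIPIENT]},
            Message={"Subject": {"Data": "Your daily Dutch words"},
                     "Body": {"Text": {"Data": body}}},
        )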

    +

    Picture of architecture

    + + +

    To deploy this project, ensure the following tools and configurations are in place:

    +
      +
1. Tools Installed:
   • Python (Tested with Python 3.8)
   • pip (Tested with pip 19.2.3)
   • Terraform (Tested with Terraform 1.10.3)
   • AWS CLI (Tested with 2.15.58)
2. Permissions: Your AWS CLI user must have the appropriate permissions to deploy the resources. Refer to the Terraform files and apply the principle of least privilege.
3. Amazon SES Verified Email: You need a verified email address in Amazon SES. This email must match the one used in the project.
4. Optional:
   • Use the provided setup.sh script or follow the steps in the script manually (might need small modifications if on Mac/Linux).
   • Alternatively, use the pre-zipped package: deployment_package.zip.
    + +
      +
1. Prepare Configuration:
   • Copy terraform.tfvars.example to terraform.tfvars.
   • Fill out the required values in terraform.tfvars.
2. Run the Terraform Workflow:
   terraform init
   terraform plan
   terraform apply

    This project was intended as a weekend project, so there is room for improvement. Potential enhancements include:

    +
      +
• Refactoring the Python code to be asynchronous for better performance and robustness.
• Splitting the lambda_function.py file into smaller modules for better organization and maintainability.

    However, since the project fulfills its purpose and is unlikely to grow further, I kept the implementation simple.

    +
diff --git a/docs/posts/quiver-a-modern-commutative-diagram-editor.html b/docs/posts/quiver-a-modern-commutative-diagram-editor.html
index 2676a1107ca..2eccd14fc4e 100644
--- a/docs/posts/quiver-a-modern-commutative-diagram-editor.html
+++ b/docs/posts/quiver-a-modern-commutative-diagram-editor.html
@@ -18,7 +18,7 @@

    Quiver: A Modern Commutative Diagram Editor

    -

    quiver

    +

    quiver

    quiver is a modern, graphical editor for commutative and pasting diagrams, capable of rendering high-quality diagrams for screen viewing, and exporting to LaTeX via tikz-cd.

Creating and modifying diagrams with quiver is orders of magnitude faster than writing the
@@ -27,23 +27,23 @@

    Quiver: A Modern Commutative Diagram Editor

    quiver features an efficient, intuitive interface for creating complex commutative diagrams and pasting diagrams. It's easy to draw diagrams involving pullbacks and pushouts,

    -

    Pullback

    +

    Pullback

    adjunctions,

    -

    Adjunction

    +

    Adjunction

    and higher cells.

    -

    3-cell

    +

    3-cell

    Object placement is based on a flexible grid that resizes according to the size of the labels.

    -

    Flexible grid

    -

    Arrow styles

    +

    Flexible grid

    +

    Arrow styles

    There is a wide range of composable arrow styles.

    -

    Colours

    +

    Colours

    And full use of colour for labels and arrows.

    -

    Screenshot mode

    +

    Screenshot mode

    quiver is intended to look good for screenshots, as well as to export LaTeX that looks as close as possible to the original diagram.

    -

    Keyboard hints

    +

    Keyboard hints

    Diagrams may be created and modified using either the mouse, by clicking and dragging, or using the keyboard, with a complete set of keyboard shortcuts for performing any action.

    -

    Export to LaTeX

    +

    Export to LaTeX

    When you export diagrams to LaTeX, quiver will embed a link to the diagram, which will allow you to return to it later if you decide it needs to be modified, or to share it with others.

diff --git a/docs/posts/the-new-science-of-controlling-lucid-dreams.html b/docs/posts/the-new-science-of-controlling-lucid-dreams.html
new file mode 100644
index 00000000000..42333ff9b2f
--- /dev/null
+++ b/docs/posts/the-new-science-of-controlling-lucid-dreams.html
@@ -0,0 +1,22 @@

    The new science of controlling lucid dreams

    + +

    I routinely control my own dreams. During a recent episode, in my dream laboratory, my experience went like this: I was asleep on a twin mattress in the dark lab room, wrapped in a cozy duvet and a blanket of silence. But I felt like I was awake. The sensation of being watched hung over me. Experimenters two rooms over peered at me through an infrared camera mounted on the wall. Electrodes on my scalp sent them signals about my brain waves. I opened my eyes—at least I thought I did—and sighed. Little specks of pink dust hovered in front of me. I examined them curiously. “Oh,” I then thought, realizing I was asleep, “this is a dream.”

    In my dream I sat up slowly, my body feeling heavy. In reality I lay silently and moved my eyes left to right behind my closed eyelids. This signal, which I had learned to make through practice, was tracked by the electrodes and told the experimenters I was lucid: asleep yet aware I was dreaming. I remembered the task they had given me before I went to sleep: summon a dream character. I called out for my grandmother, and moments later simple black-and-white photographs of her appeared, shape-shifting and vague. I could sense her presence, a connection, a warmth rolling along my spine. It was a simple and meaningful dream that soon faded into a pleasant awakening.

    Once I was awake, the scientists at the Dream Engineering Lab I direct at the University of Montreal asked me, through the intercom, about my perception of characters, any interactions with them and how they affected my mood on awakening. Even in her unusual forms, my grandmother had felt real, as if she had her own thoughts, feelings and agency. Reports from other dreamers often reflect similar sensations—the result of the brain’s striking ability in sleep to create realistic avatars we can interact with. Researchers suspect that these dreamy social scenarios help us learn how to interact with people in waking life.




    Many people have had lucid dreams. Typically you are immersed in an experience, then something seems “off,” and you realize you are actually dreaming. Often people wake up right after they become lucid, but with practice you can learn how to remain lucid and try to direct what happens. In the lab we can prime sleepers to have lucid dreams by waking them and then prompting them as they fall back asleep. At home you can try waking up and visualizing a lucid dream (most effectively in the early morning), creating a strong intention to become lucid before falling asleep again.

    In the past few years scientists have discovered that while someone is having a lucid dream, they can communicate with an experimenter in a control room, and that person can communicate with the dreamer, giving them instructions to do something within the dream. In a landmark paper published in 2021 in Current Biology, researchers in the U.S., the Netherlands, France and Germany provided evidence of two-way, real-time communication during lucid dreams. At two locations researchers presented spoken math problems to sleeping participants, who accurately computed the correct solution. When one team asked, “What is eight minus six?” the dreamers answered with two left-right eye movements. Another team asked yes-or-no questions, and lucid dreamers frowned to indicate “no” and smiled for “yes,” with their movements recorded by electrodes around their eyebrows and mouth.

    Sleep researchers are now using emerging technologies to interface directly with the dreaming mind. Meanwhile neuroimaging studies are revealing the unique patterns of brain activity that arise during lucid dreaming. This research could lead to wearable devices programmed with algorithms that detect opportune moments to induce lucidity in people as they sleep. As researchers, we are excited about this possibility because directing, or “engineering,” a dream may allow people to reduce the severity or frequency of nightmares, improve sleep quality and morning mood, and even enhance general health and well-being.


    Scientists have known that lucid dreams are real since the late 1970s. In 1980 Stephen LaBerge, then a Ph.D. student at Stanford University, published a paper about the side-to-side eye-signaling method that proved lucidity’s existence. Experts went on to demonstrate that lucid dreamers could control their breathing patterns and muscle twitches, which provided ways for them to communicate with the awake world. Imaging studies revealed more wakelike activity in the brain during lucid dreams than nonlucid dreams. This momentum culminated in the first Dream x Engineering Workshop at the Massachusetts Institute of Technology Media Laboratory, which I led in 2019. LaBerge was there, along with 50 dream scientists from around the world. For two days we explored how we might engineer dreams. We focused on using new technologies to induce lucid dreams in novices and exploring the brain basis and health benefits of lucid dreaming on a larger scale.

    Since then, many more researchers have become interested; progress has been quick and revealing. Investigators working in more than a dozen countries have learned how to induce and record lucid dreams with wearable devices and even use the techniques to treat nightmares, insomnia, and other sleep problems.


Treating nightmares is an important goal because they are linked to all manner of psychiatric and sleep disorders, including addiction, psychosis, narcolepsy and insomnia, as well as higher risks for anxiety, depression and suicide. The perils are especially relevant for people with post-traumatic stress disorder who experience nightmares, which for more than half of PTSD patients replay traumatic events again and again, potentially retraumatizing them each time. PTSD sufferers with severe nightmares have more acute symptoms and a fourfold greater risk of suicide compared with people with PTSD who don’t experience such dreams.

    In a recent study, 49 PTSD patients from nine countries who had long histories of traumatic nightmares attended a week-long virtual workshop with lucid dreaming expert, trainer and author Charlie Morley. To learn how to induce lucid dreams that might heal, participants imagined positive versions of their nightmares in which they engaged curiously with the dream or with threatening dream characters. One patient reported calling out into the dreamscape, “Dreamer, heal my body!” She then experienced roaring in her ears as her body vibrated forcefully. Another patient asked to meet and befriend her anxiety, which led to the emergence of a giant, golden lozenge that evoked her amazement and gratitude. After just one week of training, all the participants had reduced their PTSD symptoms. They also recalled fewer nightmares.

    Laboratory studies have yielded similar results. One person with weekly nightmares took part in a study led by one of my lab members, Remington Mallett. While sleeping in an enclosed lab bedroom with more than a dozen electrodes pasted on her scalp and face, this young woman had a nightmare. She dreamed she was in a church parking lot, and an approaching group of people with pitchforks was chanting, “Die, die, die.” She realized she was asleep and dreaming in the lab and that the experimenter was watching from the other room. She gave a left-right eye signal, knowing the experimenter would wake her up. She later said, “In the dream I was aware that you [the experimenters] were there and reachable.” She gave the signal because she knew it would get her out of the dream, and it did. Her nightmare frequency decreased after this lab visit, and four weeks later it was still lower than it had been before the experiment.

    Even just the moment of becoming lucid can sometimes bring immediate relief from a nightmare because you realize you are dreaming and that there is no real danger—similar to the relief we feel when we wake up from a nightmare and realize it was just a dream. Often when people become lucid during a nightmare, they decide to simply wake up—an immediate solution. Closing and opening your eyes repeatedly is another way to intentionally wake up from a dream, which could be useful during nightmares when at home, outside a lab.

Lucid dreaming could improve sleep health more generally. For example, we now know that people with insomnia have more unpleasant dreams than sound sleepers, including dreams in which they feel like they are awake and are worrying about not sleeping. In one recent study, insomnia patients underwent two weeks of lucid-dream training that included setting presleep intentions of becoming lucid and visualizing the kind of lucid dream they wanted to have. These practices led to less severe insomnia and less frequent anxiety and depressive symptoms in participants over time. It could be that the increased lucidity made them more aware of the fact that they were asleep, thereby improving their subjective sense of sleep quality. It’s also likely that lucid dreaming made their dreams more pleasant; my team and other researchers have shown numerous times that both lucid and positive dreams are associated with better sleep quality, mood and restfulness after waking.


To improve dream engineering, we need to have a clearer understanding of what is happening in the brain during lucid dreams. Recent work in sleep and neuroscience labs is revealing the brain patterns involved.

    Our most vivid dreaming takes place during rapid-eye-movement, or REM, sleep—the light phase of sleep when the brain is most active and wakelike, especially when close to the time that a person would usually get up. Lucidity may enhance one of the main functions of REM sleep: to refresh connections between the prefrontal cortex, where our brains control our thoughts and decisions, and the amygdala, where they generate our emotions. Sleep helps us control our emotions every day. When REM sleep is disrupted, the prefrontal cortex becomes less effective at regulating arousal both during sleep and during the subsequent day. This creates a vicious cycle for people with nightmares and insomnia: a night of poor sleep is followed by a worse mood and decreased defenses against stress the next day, leading to another night of disturbed sleep, and so on.

    In contrast, lucid dreaming is associated with increased activation in the prefrontal cortex. To have stable lucid dreams, you need to remain calm and attentive, or you will probably wake up from excitement. Maintaining self-control seems to be central to having positive lucid-dream experiences, resolving nightmares, and boosting creativity and mood. That was the conclusion of a recent study by Mallett, who surveyed 400 posts on Reddit to identify exactly when and how lucid dreams are helpful for improving mental health.

    We’re learning that the real mental health benefits of lucid dreaming seem to come when dreamers can direct the content. Maintaining self-control in dreams is a bit of a learned skill. Similar to mindfulness, the dreamer must practice remaining both calm and focused while in an unpredictable and unstable dream. People can then learn to control dreams by using tricks of attention such as opening and closing their eyes and expecting, or even commanding, an object such as the Eiffel Tower to appear. This skill most likely relies on specific patterns of neural activation and on cognitive practice. To be at once an actor in and director of a lucid dream requires delicate cognitive control and flexibility, but expert lucid dreamers—people who have lucid dreams at least weekly—would probably say “control” is not the most accurate term. It’s more of an improvisation, a balancing act of guiding the dream toward desired content while allowing it to arise spontaneously—like a jazz musician suggesting a rhythm or melody but also listening and adjusting to what the other musicians are playing.

    To better understand how this improvisation happens, my colleague Catherine Duclos is studying the basic brain patterns of lucid dreaming in expert lucid dreamers in our Montreal lab. The volunteers sleep normally for the first half of the night, but in the early morning experimenters awaken them to place a cap on their head that is used for electroencephalogram (EEG) tests. The cap has 128 electrodes—many more than are typically used in sleep studies. After about 30 minutes, when all the electrodes are well positioned, the subjects return to sleep, intending to have a lucid dream.

    Once Duclos has identified patterns of brain-wave activity that occur only in lucid dreams, she can use that information in the lab to try to directly enhance lucidity and control by augmenting activation in the cortex with electrical brain stimulation. After decades of characterizing sleep as an “offline” brain process, scientists now view the sleeping brain as “entrainable”—it is malleable and can be controlled through external stimulation. By applying an electric current of a specific wavelength to the scalp, scientists can modulate the rhythm of the sleeping brain to make brain waves faster and more wakelike in REM sleep or slower as they are in deep sleep.


    Duclos plans to use transcranial alternating-current stimulation (tACS) to shape brain rhythms so that they are more similar to those in lucid dreams, based on the patterns she finds in the dreams she is recording now. Researchers in prior studies have also attempted to use tACS to induce lucid dreams, with mixed results. We hope the increased resolution of high-density EEG will help.

    Another study of expert lucid dreamers will also help clarify how cognitive control works in a lucid dream. Tobi Matzek, one of my Ph.D. students and an expert lucid dreamer, spent four nights in our lab being recorded by EEG. Each night, as early morning approached, we awakened her and presented a 20-minute instruction over speakers in the bedroom, training her to pay attention to what she was experiencing after we woke her and to maintain this awareness when sleeping. She then fell back asleep and became lucid repeatedly. She used control strategies such as calling out requests for desired characters in the dream. In one instance, Matzek said she called for “God to appear as a perceivable form,” and an emerging ball of white light brought with it feelings of euphoria. She awoke in awe.

Matzek had eight lucid dreams, in which she summoned dream figures whom she perceived as having higher levels of self-control and independent thoughts than typical dream characters. (Her dreams described in this article were presented at a recent conference.) This study is showing us how our sleeping brain creates dream characters and just how meaningful fictional and at times otherworldly social scenarios can feel. Lucid dreamers who can conjure up characters rate these dreams as more positive and mystical than other dreams. It’s possible that lucid dreams could create opportunities to visit with lost loved ones, spiritual teachers, or family and friends, but so far we know little about how to generate such experiences or how they might impact waking life.

    Matzek and other expert lucid dreamers sometimes ask big questions during their dreams. One night Matzek asked, “Can I experience the creation of the universe?” and she dreamed of being “immersed in outer space, surrounded by stars and planets and other huge celestial objects.... The darkness of space is deep and rich, and every planet and star is superbright.” At one point she felt overwhelmed by the vastness, but a spiritual presence helped her stay calm. The end result, she says, was “absolutely breathtaking.” She felt weightless and was “slowly spinning head over heels as I take in everything around me. Many [stars] are brown and red, and it’s like they’re all glowing. I know that I am actually seeing the universe uncreated, back in time.” Understanding what’s happening inside the brain during these altered states of consciousness could reveal how to induce such mystical experiences on demand.

    Dreams are ephemeral, but they feel real and impactful because the brain and body experience them as real. Brain imaging shows that our dreams are read as “real” in the sensorimotor cortex. When we dream of clenching a fist, the motor cortex becomes more active, and muscles in the forearm twitch. Dreaming is the ultimate reality simulator.

    Because the body experiences physical reality in sleep, we can use visual cues, sounds, and other sensations—pressure, temperature, vibration—to sculpt the dreamworld. In my lab we use flashing lights or beeping sounds during presleep lucidity training. As we did for Matzek, we wake up participants in the early morning and pursue a 20-minute training: while they lie in bed with their eyes closed, a recorded voice instructs them to remain self-aware and to pay attention to their ongoing sensory experiences. We present the flashing lights and beeping alongside this tracking so the sensory cues will serve as reminders to remain lucid.

    When participants go back to sleep, we present the cues again during REM sleep to “reactivate” the associated mind state. Fifty percent of the time, participants have a lucid dream—a higher rate than without the cues. Beeping sounds played during sleep caused one person to dream of shopping in a supermarket: “I was just putting things in my trolley, and I could hear the beeping, and it was like I was getting loads of messages on my phone telling me what to buy in Tesco ... things like, ‘Buy some biscuits.’” The cues made their way into the dream and served as reminders to become lucid.

    Dream engineers around the world, such as Daniel Erlacher and Emma Peters of the Institute of Sport Science at Bern University in Switzerland, are exploring new types of sensory stimuli to more reliably induce lucid dreams. These cues include subtle vibrations that could be delivered by a wearable headband or smart ring, little electric pulses that cause muscles to twitch, or vestibular stimulation—an electric current sent behind the ears that induces sensations of falling or spinning. These sensations might be more easily detectable by dreamers than flashing lights and beeping sounds, perhaps because dreams already have so much competing visual and auditory content.

    Lucid dreamers can communicate with people in the waking world by controlling their sleeping bodies. In addition to making deliberate eye movements, lucid dreamers can frown, clench their hands or control their breathing, and scientists can record all of this in the lab. They can measure respiration with a belt around the torso that detects expansion and contraction of the lungs or with a little sensor on the lip that can track the flow of air in and out of the nose. Kristoffer Appel of the Institute of Sleep and Dream Technologies in Germany has even decoded word messages from lucid dreamers. The dreamers held their thumb out in front of their face, traced letters, and followed the movement of their thumb with their eyes. Dreamers could say, with their eye movements, “Hello, dream.” We are learning to converse with lucid dreamers, getting ever more complex messages into and out of the sleeping brain and body to direct and record dreams in real time.


I expect that the mental health applications of lucid dreaming will grow. Achilleas Pavlou and Alejandra Montemayor Garcia of the University of Nicosia Medical School in Cyprus are developing wearable devices programmed with machine-learning algorithms to detect when nightmares are occurring based on bio-signals such as brain activity, breathing and heart rate. My team, along with collaborators at the Donders Institute in the Netherlands and the IMT School for Advanced Studies in Lucca, Italy, is testing a simple EEG headband that can detect REM sleep and deliver the kinds of sensory cues I mentioned earlier to induce lucid dreams. If successful, such dream aids could be made widely available at home. Headbands and watches could help people call for help to escape nightmares—or just help them induce lucid dreams or direct the content for more satisfying dreams.

    People could also use these tools simply to have exotic recreational experiences. In 2024 Adam Haar, who recently finished a postdoctoral fellowship at M.I.T., and artist Carsten Höller created an exhibit in a museum in Basel, Switzerland, that welcomed overnight visitors. A bed on six robotic legs created a rocking motion before and during sleep, while a fly agaric mushroom sculpture spun above the bed. In the liminal space before sleep onset, the dreamer was reminded to dream of flying, and rocking motions and flashing red light from the installation seeped through their body and eyelids.

    These stimuli were replayed at various moments throughout the night, and the sleeper was then awakened for dream reports. One visitor noted visions of “floating on the sea ... and climbing inside the squishy stalk of a giant mushroom from the bottom and being engulfed in its gravityless squishy innards,” even of being buffeted up from the ground on the wind. In the weeks after, this woman reported “countless flight-adjacent or weightlessness dreams,” such as “gliding in the air along miles of zip line through a Swiss-looking city.”

    For lucid dreamers, flying is one of the most sought-after and euphoric experiences. In a 2020 study led by Claudia Picard-Deland of the University of Montreal’s Dream and Nightmare Laboratory, participants used a virtual-reality flight simulation prior to taking a nap and then recorded their dreams for two weeks at home. Playing in the virtual-reality environment for just 15 minutes led to an eightfold increase in flying dreams. And even though the study was not designed to induce lucidity, the experimenters found that flying dreams elevated it. One participant had their first-ever lucid dream: “I succeeded to make myself float a little, then once I realized that it worked, that I had control, I put my hands just like Iron Man at my sides.... I heard a big boom and a constant noise, as if I had plane propellers at the ends of my arms, and I accelerated so fast I couldn’t believe it. I screamed with joy as loud as I could.” The participant marveled at “the quantity of detail of physical sensations that I felt from flying, the intense acceleration, the wind,” as well as seeing, from above, a beautiful city from the future.

    Other gadgets may not be far off. Haar developed Dormio during his Ph.D. work at M.I.T. It is basically a glove with sensors that can measure muscle flexion, heart rate and electrical skin activity, all of which change as you drift off to sleep. When Dormio detects that you’ve just fallen asleep, it gives a spoken prompt to influence what you dream about. After a couple of minutes, it wakes you up to recall imagery, and if you follow this process several times, you can engineer brief dreams that have content you desire.

    Nathan Whitmore of the M.I.T. Media Lab has developed a phone app to deliver voice training for lucid dreaming, paired with auditory cues presented again during sleep. Initial results with more than 100 participants showed that presleep training brought on lucid dreams. Ken Paller of Northwestern University and Mallett have discovered EEG signatures that seem to precede the onset of lucidity. Such measures could lead to algorithms that detect opportune moments to deliver sensory cues and induce lucid dreams. Pair these with a flying game prior to sleep, and you might be in for a fun night.

diff --git a/docs/posts/the-trap-of-i-am-not-an-extrovert.html b/docs/posts/the-trap-of-i-am-not-an-extrovert.html
deleted file mode 100644
index 79b994260ae..00000000000
--- a/docs/posts/the-trap-of-i-am-not-an-extrovert.html
+++ /dev/null
@@ -1,103 +0,0 @@

    The trap of "I am not an extrovert"

    - -


    -

It was one of those farewell dinners we had hosted back in college. I was in my third year, surrounded by two junior batches and two senior batches at the same party. And being the elected governor of the club, my job was to ensure that the younger ones learned and had a good time with the seniors.

    -

On this one particular night, a lot of people were bonding, and it was amazing. But then I saw this young boy sitting in the corner by himself. Let’s call him Aditya. It was a bit dark and Aditya was trying to just get by, unnoticed. He looked overwhelmed and uninterested. I went up to him with a big smile and a lot of kindness, and said - “Hey man, what’s up? Why don’t you go and talk to a few people here? They are friendly, you know. You like Open Source, so maybe talk to that guy (pointing to a senior of mine) and ask him how he got started.”

    -

His response that day still echoes in my ears. It felt like a voice coming out of a closed heart. My words had fallen flat. He said, with a smirk, “I am not an extrovert”.

    -

I looked into his eyes and saw the disgust for everyone in the room. Aditya was a proud introvert, as I like to call them. Little did he know that most people in that club were introverts, true nerds. They would rather sit in front of the computer than go out during the yearly college social fest. Yet Aditya thought he was unique and did not belong there. Later that year, Aditya left the club.

    -

    I have met a lot of Adityas in life. They sit quietly in one group, but are the loudest in another. They use introversion as an excuse to not grow. What they don’t realize is - everyone is an introvert and everyone is an extrovert. It’s often an unconscious choice we make. For some, it is easy and natural, for some it takes a bit more effort.

    -

    Be an extrovert at work

    -

In the traditional sense, everyone should be an extrovert at work. But there are two big challenges with this. First, people don't know what that means. Does being an extrovert at work mean you need to shout "Good morning!" to everyone like a human alarm clock? No, but if you want to try, let me know how that works out. Does it mean I should be speaking up in every single meeting? Does it mean I get to talk the most in conversations?

The second challenge is the misconception that communication should be natural and effortless. The reality is that communicating takes effort and drains energy for everyone. Most speakers and developer advocates (I have been one) often enjoy the calm once a meeting is over. People don't realize that everyone is putting in some effort, even though it may not feel that way.

    It’s a skill, learn and practice it like a skill


Let me start by saying that work is not a social club. Everything you do at work is part of work. Talking to your colleagues about their lives? It's part of work. You are building relationships that will help you later. No matter what role you play, you will always have to communicate and collaborate with others. If you disagree with this, you should go back to the drawing board and think deeply.

    Allow me to paint you a picture. Ram and Shyam are two senior engineers in a team. (I could have picked more modern characters, but in this world, Ram and Shyam are such classy names, how can I not pick them?!)

[image: ramandshyam]

Ram has been writing code for five years. He likes to sit in a cave and do his work. He has often rescued the team from critical bugs. However, he's a bit shy and doesn't like to share what he is working on. He'd rather disappear for a week to finish the work than collaborate with others. As a result, he is less visible to his colleagues and leadership.

Shyam has also been writing code for five years. You wouldn't call him a genius like Ram, but he asks excellent questions. He isn't trying to be difficult; he likes to make sure everyone is on the same page. He breaks his five days of work into smaller chunks and often invites others to collaborate with him. He's good at being transparent and at delegating, and he's always excited to do a demo and talk about his work with colleagues from different teams. As a result, he is more visible to everyone around him.

The manager likes both of them equally. But when the budget is tight and he or she can promote only one of them, everyone favors Shyam slightly over Ram. Natural biases.

Ram learns about Shyam's promotion and feels a bit disappointed. He goes back home, says to himself, "I am not an extrovert," consoles himself and moves on with life.

You can find these Rams and Shyams in almost every group in the world. The biggest misconception the Rams have is that communication skills can't be learned.

Communication needs to be treated as a necessary skill. It has a direct impact on you, your family and your career. Why wouldn't you do what brings more prosperity back home?

    Being outspoken does not mean Fast Thinking


    If you haven’t heard the concept of “Fast thinking” vs “Slow thinking”, you should check out Thinking Fast and Slow. Here’s a summary anyway.

    -

Our brains have two modes when it comes to making decisions. There is a fast mode that makes most of our day-to-day decisions on autopilot. For example, when brushing your teeth, you don't plan out which quadrant of the teeth to begin with. It all just happens while you think about the delicious breakfast you'll be having later.

Slow thinking requires significantly more energy, but it is responsible for most of the learning and high-quality work you produce. Since fast thinking requires less energy, the brain would ideally like to do everything in that mode.

[image: quadrants]

This graph is mostly inspired by engineers, but it applies to any highly skilled work, including R&D. People like working with others who collaborate with them, but they also prefer those who think deeply when it's needed.

While the concept of fast and slow thinking helps us understand the mental energy required for thoughtful collaboration, it also ties into another dynamic we often see: frontbencher vibes.

    Frontbencher vibes


While the diagram above represents my subjective view of the world, I do want to address the "Outspoken" and "Fast thinker" category of people who emit frontbencher vibes. They are the ones who always raise their hand when someone asks, "Does anyone have a question?" There is a desire to learn and grow, but also a hidden desire to be seen. They often do not care what others think of them.

I sometimes emit frontbencher vibes myself. But there is a thin line between being a frontbencher and being an annoying human being. I usually shut up when I sense that I am being the annoying one in the room. It's one of the most difficult social skills I have come across in life.

Be an introvert in real-life friendships

Some of you might be thinking, "This extrovert doesn't understand what it is like to be an introvert." Let me disprove that. A while ago, I wrote about BFS vs DFS in friendships, inspired by the concepts of breadth-first search and depth-first search for traversing trees. Long-lasting friendships are often two introverts talking to each other and exploring each other's depths. As I grow older, I realize how rare and precious that is. Making friends gets harder and harder as you age.

    DFS friendships are like diving deep into a conversation about life at 3 AM. BFS friendships? More like, “So, how’s the weather?” ten times over. Both have their charm.
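
For readers who haven't met the two traversals, here is a tiny sketch of the analogy. It is not from the original BFS vs DFS post; the graph and the names in it are made up. BFS visits every acquaintance one level at a time, while DFS follows a single friendship as deep as it goes before backtracking:

    from collections import deque

    # Toy, hypothetical friendship graph.
    friends = {
        "you": ["ana", "bo"],
        "ana": ["chen"],
        "bo": ["dee"],
        "chen": [],
        "dee": [],
    }

    def bfs(graph, start):
        # Breadth-first: meet everyone one "level" of acquaintance at a time.
        order, queue, seen = [], deque([start]), {start}
        while queue:
            person = queue.popleft()
            order.append(person)
            for friend in graph[person]:
                if friend not in seen:
                    seen.add(friend)
                    queue.append(friend)
        return order  # ['you', 'ana', 'bo', 'chen', 'dee']

    def dfs(graph, person, seen=None):
        # Depth-first: follow one chain of friends as far as it goes.
        seen = [] if seen is None else seen
        seen.append(person)
        for friend in graph[person]:
            if friend not in seen:
                dfs(graph, friend, seen)
        return seen  # dfs(friends, 'you') -> ['you', 'ana', 'chen', 'bo', 'dee']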


If you read that article or know me personally, you'll realize that I am an extrovert on camera but an introvert in real life. I find this lifestyle fascinating. Most high-growth people I know well are like this.

    Your mileage will vary


You can always find people who are more extroverted or introverted than you. Instead of hiding behind some sort of pride in a personality trait, recognize the necessary skills and learn them, especially at work. That being said, slow thinking and collaboration go a long way.

diff --git a/docs/posts/thermodynamic-model-identifies-how-gold-reaches-earth-s-surface.html b/docs/posts/thermodynamic-model-identifies-how-gold-reaches-earth-s-surface.html
deleted file mode 100644
index 7b4cecf84ff..00000000000
--- a/docs/posts/thermodynamic-model-identifies-how-gold-reaches-earth-s-surface.html
+++ /dev/null
@@ -1,72 +0,0 @@

    Thermodynamic model identifies how gold reaches Earth's surface

[Figure: Study identifies how gold reaches Earth's surface. Credit: Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2404731121]

    A research team including a University of Michigan scientist has discovered a new gold-sulfur complex that helps researchers understand how gold deposits are formed.


Gold associated with volcanoes around the Pacific Ring of Fire originates in Earth's mantle and is transported to the surface by magma. But how that gold is brought to the surface has been a subject of debate. Now, the research team has used numerical modeling to reveal the specific conditions that lead to the enrichment of gold in magmas rising from the Earth's mantle to its surface.

    Specifically, the model reveals the importance of a gold-trisulfur complex whose existence has been vigorously debated, according to Adam Simon, U-M professor of Earth and environmental sciences and co-author of the study.


    The presence of this gold-trisulfur complex under a very specific set of pressures and temperatures in the mantle 30 to 50 miles beneath active volcanoes causes gold to be transferred from the mantle into magmas that eventually move to the Earth's surface. The team's results are published in the Proceedings of the National Academy of Sciences.


    "This thermodynamic model that we've now published is the first to reveal the presence of the gold-trisulfur complex that we previously did not know existed at these conditions," Simon said. "This offers the most plausible explanation for the very high concentrations of gold in some mineral systems in subduction zone environments."


Gold deposits associated with volcanoes form in what are called subduction zones. Subduction zones are regions where an oceanic plate (the Pacific plate, which lies under the Pacific Ocean) is diving under the continental plates that surround it. In these seams where plates meet each other, magma from Earth's mantle has the opportunity to rise to the surface.

    "On all of the continents around the Pacific Ocean, from New Zealand to Indonesia, the Philippines, Japan, Russia, Alaska, the western United States and Canada, all the way down to Chile, we have lots of active volcanoes," Simon said. "All of those form over or in a subduction zone environment. The same types of processes that result in are processes that form gold deposits."

    -

Gold is happy in Earth's mantle above the subducting ocean plate. But when conditions are just right and a fluid containing the trisulfur ion is added from the subducting plate to the mantle, gold strongly prefers to bond with trisulfur, forming a gold-trisulfur complex. This complex is highly mobile in magma.

Scientists already knew that gold forms complexes with various sulfur ions, but this study, which includes scientists from China, Switzerland, Australia and France, is the first to present a robust thermodynamic model for the existence and importance of the gold-trisulfur complex.

To identify this new complex, the researchers ran lab experiments in which they controlled the pressure and temperature and measured the outcome. They then developed a thermodynamic model that reproduces those experimental results, and that model can be applied to real-world conditions.
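
As a loose illustration of that calibrate-then-extrapolate workflow (this is not the published PNAS model; the data, units and functional form below are invented), one could fit a simple solubility expression to controlled-condition experiments and then evaluate it at mantle conditions:

    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic "experiments" at controlled temperature (K) and pressure (kbar),
    # each with a measured gold solubility. Purely illustrative values.
    T = np.array([1100.0, 1200.0, 1300.0, 1400.0])        # kelvin
    P = np.array([10.0, 15.0, 20.0, 25.0])                # kbar
    log_solubility = np.array([-3.1, -2.6, -2.2, -1.9])   # log10 ppm Au (made up)

    def model(X, a, b, c):
        # Simple van't Hoff-style form: log S = a + b/T + c*P/T (an assumption).
        T, P = X
        return a + b / T + c * P / T

    # Calibrate the model against the lab data...
    params, _ = curve_fit(model, (T, P), log_solubility)

    # ...then evaluate it at conditions like those 30 to 50 miles below an arc.
    print(model((np.array([1250.0]), np.array([18.0])), *params))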

    "These results provide a really robust understanding of what causes certain subduction zones to produce very gold-rich ore deposits," Simon said. "Combining the results of this study with existing studies ultimately improves our understanding of how form and can have a positive impact on exploration."

    - - -
    -

More information: Deng-Yang He et al, Mantle oxidation by sulfur drives the formation of giant gold deposits in subduction zones, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2404731121


Citation: Thermodynamic model identifies how gold reaches Earth's surface (2024, December 24), retrieved 27 December 2024 from https://phys.org/news/2024-12-thermodynamic-gold-earth-surface.html


This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

diff --git a/docs/posts/weeknotes-28.html b/docs/posts/weeknotes-28.html
index 7bbc392294b..bb79bd97431 100644
--- a/docs/posts/weeknotes-28.html
+++ b/docs/posts/weeknotes-28.html
@@ -68,7 +68,7 @@

    Weeknotes #28

  • i feel like i'm really clear on what i want this year without a big chunk of planning, but i also want conflicting things e.g. fancy sleeve tattoo vs. saving a big emergency fund.
  • since i guess i won't write another thing on here weeknotes wise till after xmas i hope you all have a fantastic one if you celebrate, and a great day otherwise too! happy solstice for tomorrow, even if you don't celebrate it too (brighter days are coming, northern hemisphere! the hateful sun is going back in her cage, southern hemisphere!) xo
- Last updated 6 days, 15 hours ago
+ Last updated 6 days, 16 hours ago


    If you liked this post, please message, email, or follow me online, check out my work in progress, share this post or subscribe to my posts by RSS!