New best story on Hacker News: Microsoft makes Zork open-source

Microsoft makes Zork open-source
471 by tabletcorry | 197 comments on Hacker News.


New best story on Hacker News: Android and iPhone users can now share files, starting with the Pixel 10

Android and iPhone users can now share files, starting with the Pixel 10
484 by abraham | 304 comments on Hacker News.


New best story on Hacker News: Thunderbird adds native Microsoft Exchange email support

Thunderbird adds native Microsoft Exchange email support
445 by babolivier | 131 comments on Hacker News.


New best story on Hacker News: Your smartphone, their rules: App stores enable corporate-government censorship

Your smartphone, their rules: App stores enable corporate-government censorship
459 by pabs3 | 247 comments on Hacker News.


New best story on Hacker News: The patent office is about to make bad patents untouchable

The patent office is about to make bad patents untouchable
507 by iamnothere | 71 comments on Hacker News.


New best story on Hacker News: Meta Segment Anything Model 3

Meta Segment Anything Model 3
482 by lukeinator42 | 94 comments on Hacker News.


New best story on Hacker News: URLs are state containers

URLs are state containers
383 by thm | 169 comments on Hacker News.


New best story on Hacker News: Facts about throwing good parties

Facts about throwing good parties
393 by cjbarber | 145 comments on Hacker News.


New best story on Hacker News: Show HN: Why write code if the LLM can just do the thing? (web app experiment)

Show HN: Why write code if the LLM can just do the thing? (web app experiment)
365 by samrolken | 260 comments on Hacker News.
I spent a few hours last weekend testing whether AI can replace code by executing directly. Built a contact manager where every HTTP request goes to an LLM with three tools: database (SQLite), webResponse (HTML/JSON/JS), and updateMemory (feedback). No routes, no controllers, no business logic. The AI designs schemas on first request, generates UIs from paths alone, and evolves based on natural language feedback. It works—forms submit, data persists, APIs return JSON—but it's catastrophically slow (30-60s per request), absurdly expensive ($0.05/request), and has zero UI consistency between requests. The capability exists; performance is the problem. When inference gets 10x faster, maybe the question shifts from "how do we generate better code?" to "why generate code at all?"
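The post describes the architecture only in prose, so here is a minimal sketch of the idea, assuming a Python standard-library HTTP server and a placeholder call_llm() standing in for whatever model and tool-calling client the author actually used; the three tool names (database, webResponse, updateMemory) come from the post, and everything else is an assumption:

    # Minimal sketch: every request is forwarded to an LLM that can use three
    # "tools" -- database (SQLite), webResponse (what to send back), and
    # updateMemory (natural-language feedback) -- instead of hand-written
    # routes or controllers. call_llm() is a placeholder, not the author's
    # actual model client.
    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DB = sqlite3.connect("app.db", check_same_thread=False)
    MEMORY: list[str] = []  # accumulated feedback, i.e. the "updateMemory" tool


    def run_sql(statement: str) -> list[tuple]:
        """The "database" tool: the model designs the schema and queries itself."""
        cur = DB.execute(statement)
        DB.commit()
        return cur.fetchall()


    def call_llm(method: str, path: str, body: str) -> dict:
        """Placeholder for the real LLM call with tool use.

        A real implementation would send the request plus MEMORY to a model,
        let it issue run_sql() calls, and return a "webResponse" describing
        status, content type, and body. Here we just return a stub page.
        """
        return {
            "status": 200,
            "content_type": "text/html",
            "body": f"<h1>LLM-generated page for {method} {path}</h1>",
        }


    class LLMHandler(BaseHTTPRequestHandler):
        def _handle(self) -> None:
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length).decode() if length else ""
            # No routes, no controllers: the model decides what this path means.
            resp = call_llm(self.command, self.path, body)
            self.send_response(resp["status"])
            self.send_header("Content-Type", resp["content_type"])
            self.end_headers()
            self.wfile.write(resp["body"].encode())

        do_GET = do_POST = do_PUT = do_DELETE = _handle


    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), LLMHandler).serve_forever()

Every request costs at least one full model round trip (more, once tool calls are counted), which is where the 30-60s latency and roughly $0.05-per-request figures in the post come from.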

New best story on Hacker News: Visopsys: OS maintained by a single developer since 1997

Visopsys: OS maintained by a single developer since 1997
344 by kome | 68 comments on Hacker News.


New best story on Hacker News: Hard Rust requirements from May onward

Hard Rust requirements from May onward
328 by rkta | 572 comments on Hacker News.


New best story on Hacker News: Ask HN: Who uses open LLMs and coding assistants locally? Share setup and laptop

Ask HN: Who uses open LLMs and coding assistants locally? Share setup and laptop
319 by threeturn | 179 comments on Hacker News.
Dear Hackers, I'm interested in your real-world workflows for using open-source LLMs and open-source coding assistants on your laptop (not just cloud/enterprise SaaS). Specifically: Which model(s) are you running (e.g., with Ollama, LM Studio, or others), and which open-source coding assistant/integration (for example, a VS Code plugin) are you using? What laptop hardware do you have (CPU, GPU/NPU, memory, discrete or integrated GPU, OS), and how does it perform for your workflow? What kinds of tasks do you use it for (code completion, refactoring, debugging, code review), and how reliable is it (what works well / where it falls short)? I'm conducting my own investigation, which I'll be happy to share once it's done. Thanks! Andrea.
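For readers unfamiliar with the kind of local setup the question assumes, here is a minimal sketch of calling a locally running Ollama server from Python; the model name is only an example, and any local runner exposing a similar HTTP API would work the same way:

    # Minimal sketch: send a code-completion style prompt to a local Ollama
    # server (default address http://localhost:11434). The model name
    # "qwen2.5-coder" is just an example; substitute whatever you have pulled.
    import json
    import urllib.request


    def local_complete(prompt: str, model: str = "qwen2.5-coder") -> str:
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]


    if __name__ == "__main__":
        print(local_complete("Write a Python function that reverses a string."))

Editor integrations such as VS Code coding-assistant plugins typically sit on top of exactly this kind of local HTTP endpoint.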