Bop Spotter
806 by walz | 176 comments on Hacker News.
New best story on Hacker News: Show HN: I Wrote a Book on Java
Show HN: I Wrote a Book on Java
520 by goostavos | 139 comments on Hacker News.
https://ift.tt/Wp6KGFV... This book is a distillation of everything I’ve learned about what effective development looks like in Java (so far!). It's about how to organize programs around data "as plain data" and the surprising benefits that emerge when we do. Programs that are built around the data they manage tend to be simpler, smaller, and significantly easier to understand. Java has changed radically over the last several years. It has picked up all kinds of new language features that support data-oriented programming (records, pattern matching, `with` expressions, sum and product types). However, this is not a book about tools. No amount of studying a screwdriver will teach you how to build a house. This book focuses on house building. We'll pick out a plot of land, lay a foundation, and build upon it a house that can weather any storm. DoP is based on a very simple idea, one people have been rediscovering since the dawn of computing: "representation is the essence of programming." When we do a really good job of capturing the data in our domain, the rest of the system tends to fall into place in a way that can feel like it’s writing itself. That's my elevator pitch! The book is currently in early access. I hope you check it out. I'd love to hear your feedback. You can get 50% off (through October 9th) with code `mlkiehl`: https://ift.tt/Wp6KGFV...
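The language features the pitch lists can be sketched together in a few lines. This is a minimal, hypothetical example of the data-oriented style described (not code from the book): a sealed interface acts as a sum type over record product types, and pattern matching consumes it exhaustively.

```java
// Hypothetical domain model in the data-oriented style: a sealed
// interface (sum type) over records (product types), consumed with
// exhaustive pattern matching. Requires Java 21+. Names are illustrative.
sealed interface Shape permits Circle, Rectangle {}
record Circle(double radius) implements Shape {}
record Rectangle(double width, double height) implements Shape {}

public class Dop {
    static double area(Shape s) {
        // The compiler checks exhaustiveness over the closed set of
        // cases, so adding a new Shape forces an update here.
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(1.0)));
        System.out.println(area(new Rectangle(2.0, 3.0)));
    }
}
```

Because the data is plain and the set of cases is closed, operations like `area` live outside the types and the compiler keeps them honest, which is one concrete sense in which "representation is the essence of programming."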
New best story on Hacker News: Show HN: iFixit created a new USB-C, repairable soldering system
Show HN: iFixit created a new USB-C, repairable soldering system
636 by kwiens | 314 comments on Hacker News.
After years of making screwdrivers and teaching people to repair electronics, we just made our first electronic tool. It's been a journey for us to build while hewing to our repairability principles. We're really excited about it. It's a USB-C powered soldering iron and smart battery power hub. Super repairable, of course. Our goal is to make soldering so easy everyone can do it: https://ift.tt/OYWzJcy We didn’t want to make just another iron, so we spent years sweating the details and crafting something that met our exacting standards. This is a high-performance iron: it can output 100W of heat, gets to soldering temperature in under 5 seconds, and automatically cools off when you set it down. The accelerometer detects when you pick it up and heats it back up. Keeping the iron at a lower temperature while you’re not soldering should prolong the life of the tip. What’s the difference between this iron and other USB-C irons on the market? Here’s a quick list:
- Higher power (our Smart Iron is 100W; competitors max out at 60W over USB-C, 88W over DC supply)
- Heat-resistant storage cap (you just have to try this out; it’s a real game changer in day-to-day use)
- Polished user experience
- A warranty and a local company to talk to (I can’t find any contact information for Miniware)
- Comfier, more natural grip
- Shorter soldering tip length
- No-tangle, heat-resistant cable
- Locking ring on the cable, so it can’t snag and get disconnected (this happens to me all the time on other irons)
- More intuitive settings, either on the Power Station or on the computer
We used Web Serial ( https://ift.tt/T2WlIzM ) for the interface, which is only supported in Chromium browsers. The biggest bummer with that is that no mobile browsers support it yet. Hopefully that changes soon. Hardware is hard! It's been a journey for us. Happy to answer any questions about how we made it. Schematics and repair information are online here: https://ift.tt/hRbuWPg...
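The pick-up/set-down behavior described above is essentially a tiny two-state controller. This is a hypothetical sketch of that logic, not iFixit's firmware; the class name, method names, and temperatures are all illustrative assumptions.

```java
// Hypothetical sketch of the idle-cooldown behavior: an accelerometer
// "picked up" event raises the target to the active set point, and
// setting the iron down drops it to a lower standby temperature to
// prolong tip life. Names and temperatures are illustrative.
public class IronController {
    private final int activeTempC;
    private final int standbyTempC;
    private int targetTempC;

    public IronController(int activeTempC, int standbyTempC) {
        this.activeTempC = activeTempC;
        this.standbyTempC = standbyTempC;
        this.targetTempC = standbyTempC; // start cool until picked up
    }

    // Called when the accelerometer detects motion (iron picked up).
    public void onPickedUp() {
        targetTempC = activeTempC;
    }

    // Called when the iron has been stationary past an idle timeout.
    public void onSetDown() {
        targetTempC = standbyTempC;
    }

    public int targetTempC() {
        return targetTempC;
    }
}
```

The real device presumably layers a heating control loop and timeouts on top of this, but the state transition itself is this simple, which is why the accelerometer-driven standby can be both fast and reliable.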
New best story on Hacker News: Ask HN: Why is Pave legal?
Ask HN: Why is Pave legal?
645 by nowyoudont | 244 comments on Hacker News.
If you haven't heard of it, Pave is a YC-backed startup that helps startups with compensation. I can't actually access the system, so I'm speaking from hearsay and from the information on the public parts of their website. The way I understand it, you connect Pave to your HR and payroll systems; they take the data about who you employ and how much you pay them, combine it with the data from all their other customer companies, and give companies a collective breakdown of compensation ranges. My question is: isn't this specifically anti-competitive wage fixing? This seems exactly like RealPage, but for employee compensation. As far as I know, colluding on wages like this is illegal. Is there something about the company that I'm missing?
New best story on Hacker News: Show HN: Infinity – Realistic AI characters that can speak
Show HN: Infinity – Realistic AI characters that can speak
468 by lcolucci | 292 comments on Hacker News.
Hey HN, this is Lina, Andrew, and Sidney from Infinity AI ( https://infinity.ai/ ). We've trained our own foundation video model focused on people. As far as we know, this is the first time someone has trained a video diffusion transformer that’s driven by audio input. This is cool because it allows for expressive, realistic-looking characters that actually speak. Here’s a blog with a bunch of examples: https://ift.tt/cjzy8Ms If you want to try it out, you can either (1) go to https://ift.tt/Sdmgbni , or (2) post a comment in this thread describing a character and we’ll generate a video for you and reply with a link. For example: “Mona Lisa saying ‘what the heck are you smiling at?’”: https://bit.ly/3z8l1TM “A 3D Pixar-style gnome with a pointy red hat reciting the Declaration of Independence”: https://bit.ly/3XzpTdS “Elon Musk singing Fly Me To The Moon by Sinatra”: https://bit.ly/47jyC7C Our tool at Infinity allows creators to type out a script with what they want their characters to say (and eventually, what they want their characters to do) and get a video out. We’ve trained for about 11 GPU-years (~$500k) so far, and our model recently started getting good results, so we wanted to share it here. We are still actively training. We had trouble creating videos of good characters with existing AI tools. Generative AI video models (like Runway and Luma) don’t allow characters to speak. And talking avatar companies (like HeyGen and Synthesia) just do lip syncing on top of previously recorded videos. This means you often get facial expressions and gestures that don’t make sense with the audio, resulting in the “uncanny” look you can’t quite put your finger on. See blog. When we started Infinity, our V1 model took the lip syncing approach. In addition to mismatched gestures, this method had many limitations, including a finite library of actors (we had to fine-tune a model for each one with existing video footage) and an inability to animate imaginary characters.
To address these limitations in V2, we decided to train an end-to-end video diffusion transformer model that takes in a single image, audio, and other conditioning signals and outputs video. We believe this end-to-end approach is the best way to capture the full complexity and nuance of human motion and emotion. One drawback of our approach is that the model is slow, despite using rectified flow (2-4x speedup) and a 3D VAE embedding layer (2-5x speedup). Here are a few things the model does surprisingly well on: (1) it can handle multiple languages, (2) it has learned some physics (e.g. it generates earrings that dangle properly and infers a matching pair on the other ear), (3) it can animate diverse types of images (paintings, sculptures, etc.) despite not being trained on those, and (4) it can handle singing. See blog. Here are some failure modes of the model: (1) it cannot handle animals (only humanoid images), (2) it often inserts hands into the frame (very annoying and distracting), (3) it’s not robust on cartoons, and (4) it can distort people’s identities (noticeable on well-known figures). See blog. Try the model here: https://ift.tt/Sdmgbni We’d love to hear what you think!
New best story on Hacker News: Tell HN: Burnout is bad for your brain, take care
Tell HN: Burnout is bad for your brain, take care
517 by tuyguntn | 208 comments on Hacker News.
I have been depressed and burned out for quite some time already; unfortunately, my brain still hasn't recovered from it. To summarize the impact of burnout on my brain:
- Before: I could learn things pretty quickly, come up with solutions to problems, and even see common patterns and the bigger underlying problems.
- After: I can't learn, can't work, can't remember, and can't see solutions to trivial problems (e.g., if your shirt is wet, you can change it, but I stare at it thinking about when it is going to dry).
Take care of your mental health.