Saturday, March 30, 2024

New top story on Hacker News: Ask HN: Going from CTO to Developer?

Ask HN: Going from CTO to Developer?
4 by thatguyagain | 5 comments on Hacker News.
Let's say you work as a CTO at a failing startup, you are tired of all the responsibilities, management, etc., and you just want to go back to being a productive developer and write code again. Will this be perceived as a stupid career move, or will people understand? Is it a bad move? Asking for a friend.

Wednesday, March 27, 2024

New top story on Hacker News: Show HN: I built an interactive plotter art exhibit for SIGGRAPH

Show HN: I built an interactive plotter art exhibit for SIGGRAPH
11 by cosiiine | 0 comments on Hacker News.
I'm enthralled with using pen plotters to make generative art. Last August at SIGGRAPH, I built an interactive experience for others to see how code can be used to make visual art. The linked blog post covers my trials and tribulations in linking a MIDI controller to one of these algorithms and sending its output to a plotter, so that people may witness the end-to-end experience.
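For a sense of what "linking a MIDI controller to one of these algorithms" can look like in practice, here is a small, hypothetical Python sketch (not the author's exhibit code) using the mido library: a controller knob drives one parameter of a toy generative polyline, which is written out as an SVG that a pen-plotter toolchain could consume.

    # Hypothetical MIDI knob -> generative algorithm -> plotter-ready SVG pipeline.
    # Not the exhibit's actual code. Requires: pip install mido python-rtmidi
    import math
    import mido

    def wave_path(amplitude: float, points: int = 400) -> str:
        # A tiny generative "algorithm": a polyline whose shape is driven by a
        # single knob-controlled parameter.
        coords = []
        for i in range(points):
            x = i / (points - 1) * 800
            y = 400 + amplitude * math.sin(i * 0.05) * math.cos(i * 0.013)
            coords.append(f"{x:.1f},{y:.1f}")
        return " ".join(coords)

    def write_svg(amplitude: float, path: str = "plot.svg") -> None:
        # Pen plotters commonly accept SVG (or HPGL converted from it), so a
        # single polyline is enough for a plottable output.
        with open(path, "w") as f:
            f.write(
                '<svg xmlns="http://www.w3.org/2000/svg" width="800" height="800">'
                f'<polyline points="{wave_path(amplitude)}" fill="none" stroke="black"/>'
                "</svg>"
            )

    amplitude = 100.0
    ports = mido.get_input_names()
    if ports:
        with mido.open_input(ports[0]) as port:
            for msg in port:
                if msg.type == "control_change" and msg.control == 1:  # knob / mod wheel
                    amplitude = msg.value / 127 * 300  # map 0-127 to 0-300 px
                    write_svg(amplitude)
    else:
        write_svg(amplitude)  # no controller attached: render with the default value

Each knob turn regenerates the SVG, so a plotting step (or a live preview) can pick up the latest file and trace it.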

Thursday, March 21, 2024

New top story on Hacker News: Launch HN: Soundry AI (YC W24) – Music sample generator for music creators

Launch HN: Soundry AI (YC W24) – Music sample generator for music creators
18 by kantthpel | 9 comments on Hacker News.
Hi everyone! We’re Mark, Justin, and Diandre of Soundry AI ( https://soundry.ai/ ). We provide generative AI tools for musicians, including text-to-sound and infinite sample packs.

We (Mark and Justin) started writing music together a few years ago but felt limited in our ability to create anything that we were proud of. Modern music production is highly technical and requires knowledge of sound design, tracking, arrangement, mixing, mastering, and digital signal processing. Even with our technical backgrounds (in AI and cloud computing respectively), we struggled to learn what we needed to know. The emergence of latent diffusion models was a turning point for us, just like for many others in tech. All of a sudden it was possible to leverage AI to create beautiful art. After meeting our cofounder Diandre (half of the DJ duo Bandlez and an expert music producer), we formed a team to apply generative AI to music production.

We began by focusing on generating music samples rather than full songs. Focusing on samples gave us several advantages, but the biggest one was the ability to build and train our custom models very quickly due to the short length of the generated audio (typically 2-10 seconds). Conveniently, our early text-to-sample model also fit well within many existing music producers’ workflows, which often involve heavy use of music samples.

We ran into several challenges when creating our text-to-sound model. The first was that we began by training our latent transformer (similar to OpenAI’s Sora) using off-the-shelf audio autoencoders (like Meta’s Encodec) and text embedders (like Google’s T5). The domain gap between the data used to train these off-the-shelf models and sample data was much greater than we expected, which caused us to incorrectly attribute blame for issues among the three model components (latent transformer, autoencoder, and embedder) during development. To see how musicians can use our text-to-sound generator to write music, you can watch our text-to-sound demo below: https://www.youtube.com/watch?v=MT3k4VV5yrs&ab_channel=Sound...

The second issue we experienced was more on the product design side. When we spoke with our users in depth, we learned that novice music producers had no idea what to type into the prompt box, and expert music producers felt that our model’s output wasn’t always what they had in mind when they typed in their prompt. It turns out that text is much better at specifying the contents of visual art than music. This particular issue is what led us to our new product: the Infinite Sample Pack.

The Infinite Sample Pack does something rather unconventional: prompting with audio rather than text. Rather than requiring you to type out a prompt and specify many parameters, all you need to do is click a button to receive new samples. Each time you select a sound, our system embeds “prompt samples” as input to our model, which then creates infinite variations. By limiting the number of possible outputs, we’re able to hide inference latency by pre-computing lots of samples ahead of time. This new approach has seen much wider adoption, so this month we’ll be opening the system up so that everyone can create Infinite Sample Packs of their very own! To compare the workflow of the two products, you can check out our new demo using the Infinite Sample Pack: https://www.youtube.com/watch?v=BqYhGipZCDY&ab_channel=Sound...

Overall, our founding principle is to start by asking the question: "What do musicians actually want?" Meta's open-sourcing of MusicGen has resulted in many interchangeable text-to-music products, but ours is embraced by musicians. By constantly having an open dialog with our users, we’ve been able to satisfy many needs, including the ability to specify BPM and key, one-shot instrument samples (so musicians can write their own melodies), and drag-and-drop support for digital audio workstations via our desktop app and VST. To hear some of the awesome songs made with our product, take a listen to our community showcases below! https://ift.tt/1gE5Art

We hope you enjoy our tool, and we look forward to the discussion in the comments.
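The "prompt with audio, pre-compute to hide latency" idea described above maps onto a simple caching pattern. Below is a minimal illustrative Python sketch, not Soundry AI's actual system; embed_prompt, generate_variation, and SamplePackCache are invented stand-ins for the real embedder, latent transformer, and serving layer.

    # Sketch of audio-as-prompt serving: embed a short prompt sample, use the
    # embedding as a cache key, and serve pre-computed variations so that no
    # model inference happens at click time. All names here are hypothetical.
    import numpy as np

    def embed_prompt(audio: np.ndarray) -> tuple:
        # Stand-in embedder: a coarse spectral fingerprint of the prompt sample.
        spectrum = np.abs(np.fft.rfft(audio, n=256))
        return tuple(np.round(spectrum[:8], 1))

    def generate_variation(audio: np.ndarray, seed: int) -> np.ndarray:
        # Stand-in generator: in the real product this would be a latent
        # transformer conditioned on the prompt-sample embedding.
        rng = np.random.default_rng(seed)
        return audio + 0.01 * rng.standard_normal(audio.shape)

    class SamplePackCache:
        def __init__(self, n_precomputed: int = 16):
            self.n_precomputed = n_precomputed
            self._cache: dict[tuple, list[np.ndarray]] = {}

        def precompute(self, prompt_audio: np.ndarray) -> None:
            key = embed_prompt(prompt_audio)
            self._cache[key] = [
                generate_variation(prompt_audio, seed) for seed in range(self.n_precomputed)
            ]

        def next_sample(self, prompt_audio: np.ndarray, i: int) -> np.ndarray:
            # Serving is a dictionary lookup, so click-to-sound latency is
            # independent of model inference time.
            key = embed_prompt(prompt_audio)
            return self._cache[key][i % self.n_precomputed]

    prompt = np.sin(2 * np.pi * 440 * np.linspace(0, 0.5, 22050))  # toy 440 Hz prompt
    cache = SamplePackCache()
    cache.precompute(prompt)
    print(cache.next_sample(prompt, 0).shape)

The trade-off is the one the post describes: limiting the set of possible outputs is what makes pre-computation feasible.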

Sunday, March 17, 2024

New top story on Hacker News: Show HN: Interactive Smartlog VSCode Extension – An Interactive Git GUI

Show HN: Interactive Smartlog VSCode Extension – An Interactive Git GUI
13 by tnesbitt210 | 1 comment on Hacker News.
Interactive Smartlog is a graphical VSCode extension that presents a simplified view of the Git log, directly highlighting the branches and commits that are most relevant to your current work. And it's not just a visual tool — it's fully interactive, allowing you to add/switch/remove branches, stage/unstage files, and manage commits directly from the GUI. This tool draws inspiration from Meta's Interactive Smartlog built for the Sapling source control system, and I've adapted it to work with Git. Transitioning the functionality from Sapling to Git wasn't just about a one-to-one feature transfer; it involved changing how data is queried & presented, as well as introducing UI interactions for several Git concepts (like branches, staging/unstaging changes, etc) which are not present in the Sapling source control system. Originally a personal project to enhance my own workflow, I've published the extension on the VSCode marketplace for anyone who would like to use it. I'm keen to hear your feedback and suggestions, as community input is invaluable in shaping its future updates.
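The extension itself runs as TypeScript inside VSCode; purely to illustrate the kind of Git query behind a "commits relevant to your current work" view, here is a small Python sketch. It is hypothetical, not the extension's code, and assumes git is on PATH, it is run inside a repository, and the default branch is named main.

    # List local branches and the commits each has that are not yet on main,
    # roughly the grouping a smartlog-style view would present.
    import subprocess

    def git(*args: str) -> str:
        return subprocess.run(
            ["git", *args], capture_output=True, text=True, check=True
        ).stdout

    def local_branches() -> list[str]:
        return git("for-each-ref", "--format=%(refname:short)", "refs/heads").split()

    def commits_ahead_of(branch: str, base: str = "main") -> list[str]:
        # %h = short hash, %s = subject; --no-merges keeps the view compact.
        out = git("log", "--no-merges", "--format=%h %s", f"{base}..{branch}")
        return [line for line in out.splitlines() if line]

    current = git("rev-parse", "--abbrev-ref", "HEAD").strip()
    for branch in local_branches():
        marker = "*" if branch == current else " "
        print(f"{marker} {branch}")
        for commit in commits_ahead_of(branch):
            print(f"    {commit}")

A GUI like Interactive Smartlog layers interactivity (switching branches, staging files, amending commits) on top of queries of this general shape.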

Monday, March 11, 2024

New top story on Hacker News: Who uses Google TPUs for inference in production?

Who uses Google TPUs for inference in production?
17 by arthurdelerue | 2 comments on Hacker News.
I am really puzzled by TPUs. I've been reading everywhere that TPUs are powerful and a great alternative to NVIDIA. I have been playing with TPUs for a couple of months now, and to be honest I don't understand how people can use them in production for inference:
- almost no resources online showing how to run modern generative models like Mistral, Yi 34B, etc. on TPUs
- poor compatibility between JAX and PyTorch
- very hard to understand the memory consumption of the TPU chips (no nvidia-smi equivalent; see the sketch below)
- rotating IP addresses on TPU VMs
- almost impossible to get my hands on a TPU v5
Is it only me? Or did I miss something? I totally understand that TPUs can be useful for training, though.
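On the memory-consumption point: there is no exact nvidia-smi equivalent, but JAX does expose per-device memory counters that give a rough picture. The sketch below is illustrative only; the available keys vary by backend and JAX version, and it is no substitute for a real profiler.

    # Rough nvidia-smi stand-in for a TPU VM using JAX's per-device counters.
    # memory_stats() is supported on TPU/GPU backends; it may return None on CPU.
    import jax

    for device in jax.local_devices():
        stats = device.memory_stats()
        if not stats:
            print(f"{device}: no memory stats available on this backend")
            continue
        in_use = stats.get("bytes_in_use", 0) / 2**30
        peak = stats.get("peak_bytes_in_use", 0) / 2**30
        limit = stats.get("bytes_limit", 0) / 2**30
        print(f"{device}: {in_use:.2f} GiB in use (peak {peak:.2f}) of {limit:.2f} GiB")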

Friday, March 8, 2024

New top story on Hacker News: Show HN: My first software project – a website to set goals and track progress

Show HN: My first software project – a website to set goals and track progress
3 by eastoeast | 0 comments on Hacker News.
Two years ago, I started building this site, which allows people to document their learning and progress in real time. The idea is: as you learn new things, you document your progress piece by piece, creating a collection of failures, breakthroughs, and knowledge. Along the way, your friends can cheer you on, and the community can give you tips and feedback. Over time, we'll create a public collection of how different problems were solved. With each progress update, the site prompts you to reflect on questions like, "If you could go back in time, what do you wish you had known?" This was my first web dev project, and everything was self-taught. It's been both a great passion and a significant learning experience! All feedback is welcome, big or small. I hope you enjoy it and find it useful. Stack: Angular, Python/Postgres, AWS, PWA service workers for notifications.

Friday, March 1, 2024

New top story on Hacker News: Ask HN: Who wants to be hired? (March 2024)

Ask HN: Who wants to be hired? (March 2024)
18 by whoishiring | 74 comments on Hacker News.
Share your information if you are looking for work. Please use this format:
Location:
Remote:
Willing to relocate:
Technologies:
Résumé/CV:
Email:
Readers: please only email these addresses to discuss work opportunities.
