This is 100% the kind of thing I was hoping for when daydreaming "what will the internet be like 20 years from now" growing up. Cool design, hilariously overpowered backend to do something basic (but do it so well), and 0 of the corporate feeling or ad apocalypse. Awesome stuff!
Hahah, so true! One clarification, though: while I'm sure the backend is overpowered (Cloudflare Pages, easily handling the Hug of Death from HN), in this case it's only serving static resources. All the hard work is done by the hilariously overpowered devices we carry in our pockets or use for everything but serious work ;)
Oh 100%, the hosting is more than fitting. I had meant to refer to the app's non-UI code (machine learning to place meme glasses)! Apologies for the lack of clarity.
if (looping.mode === "off") {
  // If you waited for a day, you deserve to see this workaround...
  // Since there is no way to not loop a gif using gifwrap,
  // let's just put a reeeeaaaaallly long delay after the last frame.
  // (GIF delays are in hundredths of a second, so 8640000 = 24 hours.)
  return 8640000;
}
I got an error after clicking the "Deal with it" button.
Graph successfully started running.
Uncaught (in promise) Error: Unsupported MIME type: application/xml
at Et.a (gif.worker-WYu6DU4i.js:23:56331)
at Et.zt (gif.worker-WYu6DU4i.js:23:78271)
This might be caused by my workplace enforcing the Okta chrome plugin :(
I interviewed a developer once who was super junior on paper but had a side project: a fully featured desktop anime episode to-watch/watched tracker with lots of library features.
Hire.
Interviewed another Dev who made arcade sticks as a side project.
Hired.
You can't teach passion. Hire all the passionate people you can. Tech stacks are irrelevant compared to the love of building things.
I was not hired once because I didn't have React experience, despite having years of both Vue and Angular and having led teams building non-trivial apps in both. IME, focusing on a minor detail like that means either a) they're so pressured to get stuff out of the door that they can't handle slightly lower productivity for a month while you learn the different syntax, and/or b) the person hiring you isn't technical enough to know this is a minor detail.
I actually quite like it when this happens from the candidate side of the table. I don't want to work anywhere that is so short term focused on "you need to have X years of experience with this exact language/framework or we won't even consider you." It saves us both a lot of time by realizing our values are clearly not aligned early on in the process.
I landed my first software job when I was 18 years old. I didn't have a degree or any professional experience in their tech stack, and the recruiter told me later that I was hired because of "my passion for Legos and foreign languages" (I was learning Swahili at the time, and one of my interviewers happened to be a former volunteer in the Peace Corps, who conducted most of the interview in Swahili).
I had a great experience at that company while I was in college, which also launched me into an exciting professional career. All of it wouldn't have happened without people like you.
Sorry to be the devil's advocate (just call me Beelzebub ... sung like Queen [1])
There may be other factors. Hiring is complex. They could have judged this fairly, and they may simply have had a better candidate. Having a project like this should still probabilistically increase your chances.
Thank you! Yeah, I totally understand and I'm not holding it against them. It's impossible to always get the right impression of and fit for a candidate (even when it's based on a few interviews and a trial, like here). So I figured I'd make the best of the situation and share the project with others! Hoping to make some new connections this way, and maybe find an even better opportunity!
Only recently started using Discord and joined various “servers”, which seem to be basically just groups with chats. When I was prompted to create my own “server”, at first I was hesitant, thinking “whoa, I don’t want them to provision a whole server for me, that can’t be cheap”, then realized it has nothing to do with actual servers.
In the Discord API, "servers" are called "guilds", which I assume was the original name for them when they started developing the platform. Don't know why they decided to change it.
In the dark ages of the internet, you had to host a server for things like Ventrilo[1], which is what Discord tried (and succeeded) at disrupting: now you can create a "server" for talking with friends (and friends of friends) without being technical.
Or, even more broadly, someone just thought it might help and implemented it, irrespective of whether it actually changed anything (discernible from background growth) or not.
Even as a slack user I’m confused. Slack emoji should still be usable in low resolutions and preferably have transparent background. None of the examples seem like emoji.
Would that even make sense, to “generate emojis” ? They are defined by the Unicode emoji specs like here: https://unicode.org/reports/tr51/
I guess it would need to generate all possible emojis given by the spec? And supposing it did generate all emojis, is there even a way to “install” them on, for example, iOS?
Well, I'm using their tool in mine (hint is in one of the Easter Eggs... or just look at what's powering the product analytics ;)). So it's an ouroboros of open-source :D
Thanks for sharing this. I love tiny projects like this, especially if they’re over-engineered.
The job market is rough. I have no doubt you were considered, and they were interested, but everyone is giving their all right now. Someone likely submitted something even cooler, somehow.
To clarify: I wrote this tool on my own initiative, on top of the "normal" process (which involved multiple interviews and a paid trial day). I was really excited about the opportunity and wanted to go the extra mile.
Dude I’m often on interview panels and some of the other interviewers are insufferable.
When they give feedback, they love getting on their soapbox and critiquing others as if they were Olympic judges or something.
I’ve had to hold myself back from saying, bro, we wouldn’t hire you yourself if these criteria actually had to be met.
Some hiring managers require all this elaborate prep work and it’s such bullshit, imo it’s a total cop out to have people do so much work as part of the interview process.
It’s the lazy way of evaluating someone versus thoughtfully putting together a good hiring process and conducting effective interviews.
I’ve had the same experience many times where the people I’m interviewing candidates with wouldn’t get hired under the same conditions. They tend to be early hires who have stuck around, and they’re super protective and defensive.
Rather than welcome people to the team to learn from them and grow the team, they criticize and reject people like crazy. It’s very reactive rather than proactive behaviour, and it’s a bad sign to have it at a company.
That's what kept me going while adding more features :D Just the joy of creating something so useless, yet still capable of bringing a smile to my (and hopefully your!) face. Glad you love it!
Glasses can now be set to any color you want. Only applies to the "Classic" style, though. Unless I find the other glasses in SVG format as well.
As a nice side-effect, the glasses can now also be set as semi-transparent!
What's interesting is that the moment you introduce a feature that allows sharing the images, you need a backend to host them. You enter a world of pain where you'll inevitably end up hosting illegal material, and it turns from a fun project into a serious project.
Yeah, that's why I'm keeping everything client-side. There's no backend. This is hosted on Cloudflare Pages; everything on the backend is static.
Definitely not looking to host any generated images.
On Mobile Safari at least, you can press and hold the image to share it. I imagine there's probably an equivalent gesture on Mobile Chrome. I spent considerable time many years ago trying to figure out how to indicate to users that they could do this ("press and hold to share", with a progress indicator). Results were mixed.
Fortunately, these days if you wanted to make it more obvious you could hook up a regular Share button to navigator.share() API and pass it your image blob.
You can choose different styles of glasses. But, yeah, they're all black-ish. Definitely open to different colors/styles! I've created a new issue with some possible solutions and will look into it: https://github.com/klimeryk/dealwithit/issues/33 (but PRs are welcome too!)
Used in WhatsApp on iOS, the last frame didn’t stay; it seems to stop instantly and/or loop back too early. IIRC I used 20 frames, 250ms between frames and 2500ms for the last one. Seems to have a similar problem on Telegram.
Otherwise incredible. The customization options are much appreciated.
This is a known problem for Telegram at least. It cuts off the last frame regardless of that frame's duration. I work around this by adding a single identical frame for 1ms at the end of the GIF.
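A sketch of that workaround. The frame shape (`{ bitmap, delayCentisecs }`) mirrors gifwrap's GifFrame (the library mentioned earlier in the thread), but the function itself is an illustrative assumption operating on plain objects:

```javascript
// Duplicate the final frame with the minimum delay, so clients that cut
// off the last frame (as described above for Telegram) still end on the
// intended image. GIF delays are stored in hundredths of a second, so 1
// is the smallest representable delay (~10ms).
function padLastFrame(frames) {
  if (frames.length === 0) return frames;
  const last = frames[frames.length - 1];
  return [...frames, { ...last, delayCentisecs: 1 }];
}
```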
It's great. I wish you could paste an image URL too, that would be slick.
Oh, and it would be cool to have an option for the meme text to go under the image too, and to appear in the final frame of motion for the glasses. For now, I can DEAL WITH IT!
If *after* generating a gif you change the gif size (bigger in my case), the new gif has the sunglasses appear in the same place. But if I change the size first and then generate, everything works fine.
I immediately knew what you meant, because I ran into the bug while testing earlier today and apparently forgot to file it. So thanks for reporting - I've filed it now properly and should have it fixed soon.
This is actually really neat. I like the configurability. Is there an option to change glasses size that I missed? I think that's the only thing I noticed I couldn't do when playing around with it.
* Found it, there is a small indicator in bottom right of glasses that allows dragging out to resize. Thanks all!
On mobile it should work too (at least when I tested on recent iPhones). The drag handle is a bit small, though. Just aim for bottom right corner of the glasses.
Minor UX notes:
- clicking the header doesn't navigate back to the "home" screen
- singular page history (so the back button doesn't take you back to the previous page state)
Combined, these made it unintuitive for me how to "get rid" of the selection I'd created (I eventually figured it out, but the previous two points were what I intuitively tried first).
> clicking the header doesn't navigate back to the "home" screen
> singular page history (so the back button doesn't take you back to the previous page state)
Could you describe this in more detail? I'm not sure I agree that state changes should be pushed to browser history. In my experience this usually leads to a confusing user experience. But that might also just be years of conditioning and I'm missing some best practices. So happy to learn more.
My natural instinct to return to the "default" view (as in, how the page looked the very first time before I interacted with anything) was to click the header text at the top (that says "deal with it"), the same way you click the "Hacker News" text at the top to return to the default home page view.
Just as something to be a quick "oops, didn't want to use that image, I want to go back to how the site is when I first visited it", and essentially wipe the state.
The history stuff is secondary, since that was my second instinct after trying to find and hit a "home" button, just from my personal UX perspective.
Infinite spinner for me: Stuck at "Loading AI models for face detection..." forever. I'd want to fix that before sharing it with recruiters as an example project.
Thank you! Yeah, it looks like the pipeline from mediapipe (used for ML face detection) has trouble initializing the WebGL context. I don't immediately see any settings for disabling WebGL in Safari, but I'm sure there are feature flags for that. Do you remember touching any settings related to this? I'm wondering if it's related to some browser setting, an extension (?) or something else.
Edit: tested on M2 and Safari 18.0 and I cannot reproduce it. Updating now to 15.0.1 to see if that makes a difference.
I went into Safari Settings and, under Feature Flags, turned on "Allow WebGL in Web Workers" and the site now works. I don't recall ever turning that off or messing with any other feature flags, so I'm guessing that this necessary feature is off by default in at least some versions of Safari. Web development must be so fun, I feel for you.
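One way to avoid the infinite spinner in that situation, sketched under the assumption that the failure is a missing worker-side GL context: feature-detect before downloading the models, and show a clear error instead.

```javascript
// Returns true only if this context can create a WebGL2 context on an
// OffscreenCanvas, which is roughly what a worker-side ML pipeline needs.
// In environments without OffscreenCanvas, or with WebGL in workers
// disabled (as in some Safari configurations), this returns false so the
// app can surface an error rather than spinning forever.
function webglAvailableInWorker() {
  if (typeof OffscreenCanvas === "undefined") return false;
  try {
    return new OffscreenCanvas(1, 1).getContext("webgl2") !== null;
  } catch {
    return false;
  }
}
```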
Yup, shades rotation is on the roadmap: https://github.com/klimeryk/dealwithit/issues/30. Hope to get it done some time today/tomorrow. Most pieces are ready, just need to figure out a good UX for the rotate handle. Worst case scenario, it can be also an input field.
Background photo rotation - could you share what would be the use case? The output would be a bit weird, since it has to be square, so either it needs to be cropped (so now there needs to be crop feature ;)) or there will be blank spaces. Honest question, I might be missing some interesting use case! But hopefully, rotating the shades would solve for most of these.
Hahaha, roger that! I mean, it would be in line with the whole premise of the tool... I should implement it so that the whole tool/page rotates around the glasses XD
This is great! I always joke with my girlfriend about people in billboards or whatever needing deal with it glasses and will likely use this regularly haha Thank you!
It's already using https://ai.google.dev/edge/mediapipe/solutions/vision/face_d... for detecting the eye (and nose) position for the perfect fit ;) The face detection should be done locally, so hopefully Google is not fine-tuning their models based on this.
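For illustration, the placement math from such eye keypoints might look like this. MediaPipe's face detector reports keypoints in normalized [0, 1] image coordinates; the 2.2 width factor and the output shape here are assumptions for the example, not the project's actual code:

```javascript
// Compute where to draw the glasses from two eye keypoints given in
// normalized [0, 1] image coordinates.
function glassesPlacement(rightEye, leftEye, imgWidth, imgHeight) {
  return {
    // Center the glasses between the eyes, in pixel coordinates...
    x: ((rightEye.x + leftEye.x) / 2) * imgWidth,
    y: ((rightEye.y + leftEye.y) / 2) * imgHeight,
    // ...size them proportionally to the eye distance...
    width: Math.abs(leftEye.x - rightEye.x) * imgWidth * 2.2,
    // ...and tilt them to match the line between the eyes.
    angle: Math.atan2(
      (leftEye.y - rightEye.y) * imgHeight,
      (leftEye.x - rightEye.x) * imgWidth
    ),
  };
}
```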
It uses face-api.js to find the face, and then moves the sunglasses over it. It's about a 5 MB model, so it's pretty slow to load. You can customize it with a URL, or drag and drop an image onto it. Resizing the browser also moves the glasses around.
Very happy to see someone take the idea way way way further!
Heck yeah! Love the touches like resize handling or rotation support! Contributions are more than welcome... hint hint ;)
> It uses face-api.js to find the face
Yeah, I'm using Google AI's Face Detector [1]. There's also TensorFlow's Face Landmarks Detection [2], which looked the most promising and accurate, but it had two bugs [3][4] that are blockers. The first one got fixed recently, but the other one is still pending.
I think we're thinking of the same thing here: https://github.com/klimeryk/dealwithit/issues/22. That's part of the reason I made the list of glasses draggable/sortable: I want to add the ability to specify which glasses should appear together and which ones should appear in a sequence.
So many features to implement, so this one did not make it for the launch, but it's definitely on the roadmap!