This page is a feed of all my posts in reverse chronological order. You can subscribe to this feed in your favourite feed reader through the icon above. You can also get a weekly digest of all of my posts via email by subscribing here:
I can't imagine I'm the first to try this, but new hobby acquired:
I ran the ones below on the spot and it was quite fun. Before this, whenever I visited the British Museum (a few times a year), I didn't really give most of those statues a second glance.
An exercise for the reader (this one's interesting because they put a reference of what it could have looked like if it were complete, based on a different statue):
And another bust of good old Caesar (might be interesting as there's so much reference material, and it's so broken):
Try it and have fun! I'll try another batch the next time I go.
I wrote a short article about a trick for editing the text in HTML text nodes with only CSS. This is one of those articles where the goal is just to share something that I learned or discovered, that someone might benefit from, and the primary mode of finding this content is through a search engine.
It doesn't quite make sense for this to be an "article" in the way that I use that word (a long-form post bound in time that people follow/subscribe to) so I might eventually turn all these guide-type posts into wiki-notes, so they can exist as non-time-bound living documents.
For a long time I've been interested in the idea of creating a digital twin of yourself. I've tried this in the past with prompt completion trained on many years of my chat data, but it was always just a bit too inaccurate and hollow.
I also take a lot of notes, and have been taking more and more recently (a subset of these are public, like this post you're reading right now). I mentioned recently that I really think that prompt completion on top of embeddings is going to be a game-changer here.
You probably already know about prompt completion (you give it some text and it continues it like auto-complete on steroids) which underpins GPT-3, ChatGPT, etc. However, it turns out that a lot of people aren't familiar with embeddings. In a nutshell, you can turn blocks of text into high-dimensional vectors. You can then do interesting things in this vector space, for example find the distance between two vectors to reason about their similarity. CohereAI wrote an ELI5 thread about embeddings if you want to learn more.
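To make this concrete, here's a toy sketch of comparing embeddings with cosine similarity. The four-dimensional vectors below are made up for illustration; real embeddings come from a model and have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 means same direction, ~0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "embeddings" purely for illustration
cat = [0.9, 0.1, 0.3, 0.0]
kitten = [0.85, 0.15, 0.35, 0.05]
invoice = [0.0, 0.8, 0.1, 0.6]

print(cosine_similarity(cat, kitten))   # high: similar meaning
print(cosine_similarity(cat, invoice))  # much lower: unrelated
```

The same one-number-per-pair similarity score is what powers semantic search: embed everything once, then rank by similarity to the query.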
None of this is particularly new -- you might remember StyleGAN from some years ago, which is what first made the concept of a latent space click for me, because it's so visual. You could generate a random vector that gets decoded to a random face (or other random things), and you could "morph" between faces by moving through this high-dimensional space. You could also find "directions" in this space (think PCA), to e.g. make a slider that increases your age as you move in that direction while keeping other features relatively unchanged, or find the "femininity" direction and make someone masculine look more feminine, or a "smiling amount" direction, etc.
The image-space equivalent of embedding text into a latent space is taking an existing image and hill-climbing to find a vector that generates the closest possible image to it (which you can then manipulate). I experimented with this using my profile picture (this was in August 2021, and things have gotten much better since!):
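As a rough sketch of what that hill-climbing looks like, here's a minimal inversion loop. I've stood in a fixed linear map for the generator purely to keep it self-contained; a real GAN generator is a deep network, but the loop is the same idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": a fixed linear map from latent space to image space.
G = rng.normal(size=(64, 8))  # maps an 8-dim latent vector to a 64-"pixel" image

def decode(z):
    return G @ z

target = decode(rng.normal(size=8))  # the image we want to invert

z = np.zeros(8)  # start from an arbitrary latent vector
lr = 0.01
for _ in range(500):
    err = decode(z) - target
    grad = G.T @ err  # gradient of 0.5 * ||decode(z) - target||^2
    z -= lr * grad    # step downhill

print(np.linalg.norm(decode(z) - target))  # essentially zero
```

With a real generator you'd swap the analytic gradient for autodiff, but the structure (loss between generated and target image, gradient steps on the latent vector) is the same.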
Today, I discovered two new projects in this space. The first was specifically for using embeddings for search which is not that interesting but, to be fair, is what it's for. In the comments of that project on HackerNews, the second project was posted by its creator which goes a step further and puts a chat interface on top of the search, which is the exact approach I talked about before and think has a lot of potential!
Soon, I would like to be able to have a conversation with myself to organise my thoughts and maybe even engage in some self-therapy. If the conversational part of the pipeline was also fine-tuned on personal data, this could be the true starting point to creating digital twins that replace us and even outlive us!
Some weeks ago I built the "Muslim ChatGPT". From user feedback, I very quickly realised that this is one use case that absolutely won't work with generative AI. Thinking about it some more, I came to a soft conclusion that, at the moment, there is a set of use cases that generative AI is simply not well suited for.
There's a class of computational problems with NP complexity. What this means is not important except that these are hard to solve but easy to verify. For example, it's hard to solve a Sudoku puzzle, but easy to check that it's correct.
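For illustration, the "easy to verify" half of that asymmetry fits in a few lines; solving is where all the work is:

```python
def is_valid_sudoku(grid):
    """Verifying a completed 9x9 grid is cheap: every row, column and
    3x3 box must contain the digits 1-9 exactly once."""
    digits = set(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[r][c]
              for r in range(br, br + 3) for c in range(bc, bc + 3)]
             for br in range(0, 9, 3) for bc in range(0, 9, 3)]
    return all(set(unit) == digits for unit in rows + cols + boxes)

# Demo with a grid built from a known-valid cyclic pattern:
grid = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
print(is_valid_sudoku(grid))  # True
grid[0][0], grid[0][1] = grid[0][1], grid[0][0]  # row still fine, columns broken
print(is_valid_sudoku(grid))  # False
```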
Similarly, I think that there's a space of GPT use cases where the results can be verified with variable difficulty, and where having correct results is of variable importance. Here's an attempt to illustrate what some of these could be:
The top right here (high difficulty to verify, but important that the results are correct) is a "danger zone", and also where deen.ai lives. I think that as large language models become more reliable, the risks will be mitigated somewhat, but in general not enough, as they can still be confidently wrong.
At the bottom, the use cases are much less risky, because you can easily check them, but the product might still be pretty useless if the answers are consistently wrong. For example, we know that ChatGPT still tends to be pretty bad at maths and at things that require multiple steps of thought, but crucially: we can tell.
The top left is kind of a weird area. I can't really think of use cases where the results are difficult to verify, but also you don't really care if they're super correct or not. The closest use case I could think of was just doing some exploratory research about a field you know nothing about, to make different parts of it more concrete, such that you can then go and google the right words to find out more from sources with high verifiability.
I think most viable use cases today live in the bottom and towards the left, but the most exciting use cases live in the top right.
Another important spectrum is whether your use case relies more on recall or on synthesis. Asking for the capital of France is recall, while generating a poem is synthesis. Generating a poem using the names of all cities in France is somewhere in between.
At the moment, LLMs are clearly better at synthesis than recall, which makes sense when you consider how they work. Indeed, most of their failures come from being a bit too loose with making things up.
Personally, I think that recall use cases are very under-explored at the moment, and have a lot of potential. This contrast is painted quite well when comparing two recent posts on HN. The first is about someone who trained nanoGPT on their personal journal here and the output was not great. Similarly, Amarbot used GPT-J fine-tuning and the results were also hit and miss.
The second uses GPT-3 Embeddings for searching a knowledge base, combined with completion to have a conversational interface with it here. This is brilliant! It solves the issues around needing the results to be as correct as possible, while still assisting you with them (e.g. if you wanted to ask for the nearest restaurants, they better actually exist)!
Somebody in the comments linked gpt_index so you can do this yourself, and I really think that this kind of architecture is the real magic dust that will revolutionise both search and discovery, and give search engines a run for their money.
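The architecture is roughly: embed your documents, retrieve the ones most similar to the question, and stuff them into the completion prompt as grounding context. Here's a toy sketch using a bag-of-words stand-in for real embeddings; an actual system would call an embeddings API for step 1 and send the assembled prompt to a completion model in step 3 (the notes and question below are made up for illustration):

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'. A real system would call an
    embeddings API here instead."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

notes = [
    "The nearest halal restaurant to the office is on Baker Street.",
    "GPT-J fine-tuning needs a beefy GPU or a cloud service.",
    "Embeddings map text into a high-dimensional vector space.",
]

def answer(question, k=2):
    # 1) Retrieve: rank notes by similarity to the question.
    q = embed(question)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)
    # 2) Augment: put the top-k notes into the prompt as grounding context.
    context = "\n".join(ranked[:k])
    # 3) Generate: a real system would send this prompt to a completion model.
    return f"Answer using ONLY the context below.\n\nContext:\n{context}\n\nQ: {question}\nA:"

print(answer("which restaurant is nearest to the office?"))
```

Because the model is constrained to answer from retrieved notes, the restaurant it names actually exists in your knowledge base, which is exactly the correctness property the pure fine-tuning approach lacks.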
Welp, looks like I'm a month late for the N-O-D-E Christmas Giveaway. You might be thinking "duh, Christmas is long gone", and I also found it weird that the deadline was the 31st of January, but it turns out that that was a mistake in the video and he corrected it in the comments.
Since I keep up with YouTube via RSS, I didn't see that comment until it was too late. I only thought to check again when my submission email bounced.
Oh well! At least it gave me a reason to finally write up my smart home setup! This also wasn't the first time that participating in N-O-D-E events didn't work out for me -- in 2018 I took part in the N-O-D-E Secret Santa and sent some goodies over to the US, and I remember really putting some effort into it. Unfortunately I never got anything back, which was a little disappointing, but hey, maybe next time!
I've been planning to start this project for a while, as well as document the journey, but never really got around to it. I had a calendar reminder that tomorrow the N-O-D-E Christmas Giveaway closes, which finally gave me the kick in the butt needed to start this one! I also want to use this as an opportunity to create short-form videos on TikTok to learn more about it (in this case, documenting the journey). The project page is here.
Recently, people whose work I admire made me have to confront the "art not artist" dilemma once more. In this case, Nick Bostrom with racism, and Justin Roiland with domestic abuse.
Thinking about it, more generally, I guess it comes down to:
However, it makes me think about the question: what if an AI were in a similar situation, having done something good and also something bad? The current vibe seems to be that AI is a "tool" and "guns don't kill people, people kill people". But once you assign agency to AI, it starts opening up unexplored questions, I think.
For example, what if you clone an AI state, one goes on to kill, the other goes on to save lives, in what way is the other liable? It's a bit like the entanglement experiment that won the 2022 Nobel physics prize -- you're entangling across space (two forks of a mind) vs time (old "good" version of a celebrity vs new "bad" version of a celebrity) where all versions are equally capable of bad in theory. To what extent are versions of people connected, and their potential?
It also reminds me of the sci-fi story Accelerando by Charles Stross (which I recommend, and you can read online for free here) where different forks of humans can be liable for debts incurred by their forks.
On a related note, I was recently reading a section in Existential Physics by Sabine Hossenfelder titled "Free Will and Morals". Forgive the awful photos, but give it a read:
So it doesn't even have to be AI. If someone is criminally insane, they are no longer agents responsible for their own actions, but rather chaotic systems to be managed, just like you don't "blame" the weather for being bad, or a small child for making mistakes.
Then, what if in a sufficiently advanced society we could simply alter our memories or reprogram criminal intent away? Are we killing the undesirable version? The main reasons for punishment are retribution, incapacitation, deterrence, and rehabilitation, but is there research out there that has really thought about how this applies to AI?
There's a fifth reason that applies only to AI: Roko's Basilisk (warning: infohazard) but it's all connected, as I wonder what majority beliefs we hold today that future cultures will find morally reprehensible. It might be things like consuming animals or the treatment of non-human intelligence that is equivalent to or greater than humans by some metric. At least we can say that racism and domestic violence are pretty obviously bad though.
Twilio used to be a cool and trustworthy company. I remember when I was in uni, some CS students (I was not a CS student) built little SMS conversation trees like it was nothing, and suddenly SMS became something you could build things with as a hobby.
Over the past month, my view of Twilio has completely changed.
Ten days ago (Jan 19th) at around 7am UTC, I woke up to large charges to our business account from Twilio, as well as a series of auto-recharge emails and finally an account suspension email. These charges happened in the span of 3 minutes just before 5am UTC. My reaction at this point was confusion. We were part of Twilio's startup programme and I didn't expect any of our usage to surpass our startup credits at this stage.
I checked the Twilio dashboard and saw a large influx of OTP verification requests from Myanmar numbers that were clearly automated. I could tell they were automated because they came in basically all at once, and mostly from the same IP address (in Palestine). At this point, I realised it was an attack. I could also see that this was some kind of app automation (rather than spamming the underlying API endpoint), as we were also getting app navigation events.
After we were suspended, the verifications failed, so the attack stopped. The attacker seems to have manually tried a California IP some hours later, probably to check if they'd been IP-blocked, and probably not from a physical phone (Android 7). Then they stopped.
I also saw that our account balance was more than £1.5k in the red (in addition to the charges to our bank account) and that our account was suspended until we zeroed that balance. The timing could not have been worse, as we were scheduled to have an important pitch to partners at a tier 1 VC firm. For all we knew, they could already have been trying the app out and unable to get in, as phone verification was confirmed broken.
We're on the lowest tier (as a startup) which means our support is limited to email. I immediately opened a ticket to inform Twilio that we were victims of a clear attack, and to ask Twilio for help in blocking these area codes, as we needed our account to be un-suspended ASAP. They took quite a long time to respond, so after some hours I went ahead and paid off the £1.5k balance in order for our account to be un-suspended, with the hope that they can refund us later.
I was scratching my head at what the possible motive of such an attack could be. I thought it must be denial of service, but couldn't think of a motive. We're not big enough for competitors to want to sabotage us, so I was expecting an email at any point from someone asking for bitcoin to stop attacking us, or a dodgy security company coming in and asking for money to prevent it. But Twilio sent an email saying that this is a case of toll fraud.
I recommend reading that article, but in essence, those numbers are premium numbers owned by the attacker, and every time Twilio sends them a verification SMS, they make money, and we foot the bill.
Twilio seemed to follow a set playbook for these situations. Their documentation names a set of countries where toll-fraud numbers most commonly originate and recommends blocking them (I suppose it's easy to get premium numbers there): Bangladesh, Sri Lanka, Myanmar, Pakistan, Uzbekistan, Azerbaijan, Kyrgyzstan, and Nigeria.
I immediately went and blocked those area codes from our side, though Twilio also automatically blocked all countries except the US and the UK anyway, so it didn't really make a difference. Also, the attacker tried again using Indonesian numbers after that, so clearly a blocklist like that is not enough. Later I went and one by one selectively allowed only countries we actually serve.
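For what it's worth, an app-side guard in front of the SMS provider is cheap insurance on top of provider-side geo controls. This is a hypothetical sketch (the function and dial codes are made up for illustration; note that +1 also covers Caribbean countries, so real filtering should parse numbers properly, e.g. with the `phonenumbers` library):

```python
# Hypothetical allowlist: only dial codes for countries the app actually serves.
ALLOWED_DIAL_CODES = ("+44", "+1")  # UK, US/Canada -- adjust to your market

def may_send_otp(phone_number: str) -> bool:
    """App-side guard checked before ever hitting the SMS provider.
    This complements (does not replace) provider-side country blocking,
    rate limiting, and fraud detection."""
    number = phone_number.strip().replace(" ", "")
    return number.startswith(ALLOWED_DIAL_CODES)

print(may_send_otp("+44 7700 900123"))  # True
print(may_send_otp("+95 9 123 4567"))   # False (Myanmar)
```

An allowlist of countries you serve is strictly safer than a blocklist of known-bad ones, since attackers just rotate to the next unblocked country (as the Indonesian numbers showed).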
Beyond this, Twilio's response was to try and do everything to blame this on us. They wash their hands of the responsibility to secure their own APIs, and instead the onus is on us to implement our own unreasonable security measures.
I told a friend about this, and through that friend found out that this is actually a very common problem that people have been having with Twilio, because Twilio dropped the ball. Apparently, out of all of those cases, we got pretty lucky (some people lost 6 figures). For me, the main issues are:
Their email was incredibly patronising, as others have reported, and they acted like they were doing us a huge favour by blessing us with a partial refund in account credits (not even real money). But first we needed to explain to them how we promise to be better and not make a silly mistake like this again!
Twilio tries to push you into agreeing not to dispute the bank charges (see the link above for why they do this). I refused to agree to this, and first wanted to know exactly how much they would refund us, and if they would refund us in real money, not account credits (they agreed to "prioritize" this).
They told us that their finance team decides the refund amount, based on the information we provide about how we'll do better, plus a breakdown of the charges. I told them exactly what we did to combat this, and what the charges were. We had lost a few hundred in startup credits, then just over £2k in real money.
Instead of telling me how much they would refund (remember, I still haven't agreed not to dispute the charges, which they "required" in order to issue a refund), they went ahead and refunded us £847 and some change immediately.
I believe this to be a ploy to discourage us from disputing the original charges, because if we disputed now, we would end up with more money back than they originally charged.
I sought some advice, with mixed opinions, but it seems quite clear that if we dispute these charges, at the very least it would mean that we can no longer use Twilio for SMS anymore (which I don't want to anyway). But, this means switching to a different provider before disputing.
It would be relatively easy to switch, as they all tend to work the same way anyway, but would still require:
This is not difficult, but time and effort that I don't have right now, as well as a distraction from our actual core product. I don't know if £1.1k is worth that "labour", or any extra stress that may come if Twilio decides to make a stink about this and pass us on to collections etc.
All I know is: Twilio, never again. I will advise people to not use Twilio for the rest of my life and longer depending on how that advice may spread and how long this article survives.
My brother's in New York and I was reminded of a scam we fell for there once. This wasn't the typical Times Square Elmo-league stuff; it seemed quite legitimate! I wanted to recount the story in case it might help someone.
We were planning to visit the Empire State building (which by the way, wasn't that great, especially that foggy day) and when we arrived there we were shocked to see a queue going all around the block and across several streets. We were approached by a man named DeShawn Cassidy selling the New York Pass.
"You can leave. Your Wallet. At home," he says. "You can laugh at aaaaall these people," as he points to the massive queue, telling us we can skip it with the glorious New York Pass. It's fast-lane entry and cheaper tickets into the Empire State building and a bunch of other attractions around New York within a certain time period.
He was a very convincing and charismatic salesman. We asked him why the people in the queue weren't cleaning him out if it was so good. He threw his hands up and said, "It behooves me!", misunderstanding what that word means.
We paid him $80 for 5 passes I believe, which was a great deal. He rubbed his hands like a fly about to have a meal as we were taking the money out, and gave us a receipt, staking his name and reputation on it, "DeShawn Cassidy", and that we can call him at any time if we need anything.
Of course, you know how the rest of the story goes. DeShawn was all but erased from existence, and we didn't have the opportunity to "laugh at all these people" as the security made us queue like everyone else. The special entrances were only for people who actually worked in the building.
We thought that maybe there's a faster queue inside, after clearing the building queue, and at least we don't need to get new tickets. Wrong again! The man at the till took one look at our little plastic cards, and in the strongest New York accent that still rings in my mind to this day, said the infamous words:
New York Pass? Don't do nothin'!
Yesterday evening I had a call with three founders looking for some advice on specific things. Something that came up was how to make a proper pitch deck. My advice is usually to go to Slidebean and check out the pitch decks of some well-known companies. There's a clear pattern to how these are structured, depending on who the target of the deck is.
But recently, a different founder sent me a pitch deck asking for feedback, and he'd used a platform called Tome[1]. His slides were pretty cool, and when viewed on that platform could even have little video bubbles where he explains each slide. At first I thought this was a GPT-3-based slide generator (similar to ChatBA (formerly ChatBCG)), but it seems to be more than that and looks like it could be a great tool for putting together a pitch deck on a whim!
Referral link, not sponsored ↩︎
Great article on some ways to interact with ChatGPT: https://oneusefulthing.substack.com/p/how-to-use-chatgpt-to-boost-your. I find it funny that so many people speak to ChatGPT politely (I do too). I wonder if post-singularity we'll be looked upon more favourably than the impolite humans.
Last weekend I built a small AI product: https://deen.ai. Over the course of the week I've been gathering feedback from friends and family (Muslim and non-Muslim). In the process I learned a bunch and made things that will be quite useful for future projects too. More info here!
A while ago I dug into my DNA via a number of services. I had the uncommon opportunity of being able to compare the results of two services (while only really paying for one). Now I finally got around to writing this up and might update it over time as I do more genealogy-related things. https://yousefamar.com/memo/notes/my/dna/
In my previous post I made a little block diagram. Here's the workflow for how I did that: https://yousefamar.com/memo/articles/writing/graphviz/
If you happen to have checked my main feed page in the past few days, you might have noticed I've added a box to subscribe to a newsletter. This is meant to be a weekly digest of the posts I make the week before, delivered to your email inbox.
I think I'm getting close to figuring out a good system for content pipelines, though I still think about it a lot. As such, this newsletter part will mostly be an experiment for now. It won't be an automated email that summarises my posts, but rather I'm going to write it myself to begin with. I'd like to follow a style like the TLDR newsletter, which I've been following since they launched. This means e.g. a summary of cool products I might have bookmarked throughout the week, which might also give me the opportunity/excuse to review and organise them.
I'm not convinced that the medium of newsletters is the right way to consume content. I for one am a religious user of kill-the-newsletter to turn newsletters into Atom feeds. A lot of people consume content via their email inboxes though, and it seems easier to go from that to the feed format, rather than the other way around at the moment. At any rate, I want to create these various ways of consuming content. The pipeline for this content might look like this:
The other consideration is visibility of my audience. I don't actually know if anyone reads what I write unless they tell me (hi James!), and unless I put tracking pixels and such in my posts, but is it really that important? With email, you have a list of subscribers, which probably gives you slightly more data over feed readers polling for updates to your feed, but again, I don't really want to be responsible for a list of emails, and I don't like being at the mercy of the big email providers' spam filters if I want to send email from my own domain (yes, this is despite SPF/DKIM and all that, based on some voodoo you can still reach people's junk folder).
So I'm thinking for now I probably don't even really care who reads what I write, and if it becomes relevant (e.g. if I want to find out what people would like to see more of), I can publish a poll.
Not too long ago I mentioned that the search engines will need to add ChatGPT-like functionality in order to stay relevant, that there's already a browser extension that does this for Google, and that Google has declared code red. Right on schedule, yesterday Microsoft announced that they're adding ChatGPT to Bing. (If you're not aware, Microsoft is a 10-figure investor in OpenAI, and OpenAI has granted an exclusive license to Microsoft, but let's not get into how "open" OpenAI is).
I heard about this via this HackerNews post and someone in the comments (can't find it now) was saying that this will kill original content as we know it because traffic won't go to people's websites anymore. After all, why click through to websites, all with different UIs and trackers and ads, when the chat bot can just give you the answers you're looking for as it's already scraped all that content. To be honest, if this were the case, I'm not so sure if it's such a bad thing. Let me explain!
First of all, have you seen the first page of Google these days? It's all listicles, content marketing, and SEO hacks. I was not surprised to hear that more and more people use TikTok as a search engine. I personally add "site:reddit.com" to my searches when I'm trying to compare products for example, to try and get some kind of real human opinions, but even that might not be viable soon. You just can't easily find what you need anymore these days without wading through ads and spam.
Monetising content through ads never really seemed like the correct approach to me (and I'm not just saying that as a consistent user of extensions that block ads and skip sponsored segments in YouTube videos). It reminds me a lot of The Fable of the Dragon-Tyrant. I recommend reading it as it's a useful metaphor, and here's why it reminds me (skip the rest of this paragraph if you don't want spoilers): there's a dragon that needs to be fed humans or it would kill everyone. Entire industries spring up around the efficient feeding of the dragon. When humans finally figured out how to kill it, there was huge resistance, as among other things, "[t]he dragon-administration provided many jobs that would be lost if the dragon was slaughtered".
I feel like content creators should not have to rely on ads in the first place in order to be able to create that content. I couldn't tell you what the ideal model is, but I really prefer the Patreon kind of model, which goes back to the ancient world through art patronage. While this doesn't make as much money as ads, I feel like there will come a point where creating content and expressing yourself is so much easier/cheaper/faster than it is today, that you won't have high costs to maintain it on average (just look at TikTok). From the other side, I feel like discovery will become so smooth and accurate, that all you need to do is create something genuinely in demand and it will be discovered on its own, without trying to employ growth hacks and shouting louder than others. I think this will have the effect that attention will not be such a fiery commodity. People will create art primarily for the sake of art, and not to make money. Companies will create good products, rather than try to market worthless cruft. At least that's my ideal world.
So how does ChatGPT as a search engine affect this? I would say that this should not affect any kinds of social communication. I don't just mean social media, but also a large subset of blogs and similar. I think people will continue to want to follow other people, even the Twitter influencer that posts business tips, rather than ask ChatGPT "give me the top 5 business tips". I believe this for one important reason: search and discovery are two different things. With search, there is intent: I know what I don't know, and I'm trying to find out. With discovery, there isn't: I don't know what I don't know, but I loiter in places where things I would find interesting might appear, and stumble upon them by chance.
Then there's the big question of having a "knowledge engine" skipping the sources. Let's ignore the problem of inaccurate information[1] for now. I would say that disseminating knowledge at the moment is an unsolved problem, even through peer-reviewed, scientific journal papers and conference proceedings (this is a whole different topic that I might write about some day, but I don't think it's a controversial view that peer-review and scientific publishing is very, very broken).
I do not believe that the inability to trace the source of a certain bit of knowledge is necessarily the problem. I also don't believe that it's necessarily impossible, but let's pretend that it is. It would be very silly, I think, to cite ChatGPT for some fact. I would bet that you could actually get a list of references for any argument you like ("Hey ChatGPT, give me 10 journal citations that climate change is not man-made").
I think the biggest use cases of ChatGPT will be to search for narrowly defined information ("what is the ffmpeg command to scale a video to 16:9?") and to discover information and vocabulary on topics that you know little about, in order to get a broad overview of a certain landscape.
However, I don't see ChatGPT-powered search killing informative articles written by humans. I see AI-generated articles killing articles generated by humans. "Killing" in the sense that they will be very difficult to find. And hey, if ChatGPT could actually do serious research, making novel contributions to the state-of-the-art, while citing prior work, then why shouldn't that work be of equal or greater value to the human equivalent?
In the case of AI-generated garbage drowning out good human articles just by sheer quantity though, what's the solution? I think there are a number of things that would help:
Overall I think that ChatGPT as the default means of finding information is a net positive thing and may kill business models that were flawed from the start, making way for something better.
I've had this problem with normal Google before (the information cards that try to answer your questions). For a long time (even after I reported it), if you searched something like "webrtc connection limit", you would get the wrong answer. Google got this answer from a StackOverflow answer that was a complete guess as far as I could tell. Fortunately, the person who asked the question eventually marked my answer as the correct one (it already had 3x more upvotes than the wrong one) although the new answer never showed up in a Google search card as far as I can tell. ↩︎
I finally wrote an article on my thoughts about ChatGPT after a lot of repeated questions/answers from/to people: https://yousefamar.com/memo/articles/ai/chatgpt/
This is one of those things where I'm not sure it should really be an "article" but instead something more akin to a living document that I update continuously, maybe with a chronological log included. At the same time, a lot of the content is temporally bound and will probably lose relevance quite fast. Something to figure out in the future!
Amarbot was using GPT-J (fine-tuned on my chat history) in order to talk like me. It's not easy to do this if you follow the instructions in the main repo, plus you need a beefy GPU. I managed to do my training in the cloud for quite cheap using Forefront. I had a few issues (some billing-related, some privacy-related) but it seems to be a small startup, and the founder himself helped me resolve these issues on Discord. As far as I could see, this was the cheapest and easiest way out there to train GPT-J models.
Unfortunately, they're shutting down.
As of today, their APIs are still running, but the founder says they're winding down as soon as they send all customers their requested checkpoints (still waiting for mine). This means Amarbot might soon be without AI responses for a while, until I find a different way to run the model.
As for fine-tuning, there no longer seems to be an easy way to do this (unless Forefront open-sources their code, which they might, but even then someone has to host it). maybe#6742 on Discord has made a Colab notebook that fine-tunes GPT-J in 8-bit and kindly sent it to me.
I've always thought that serverless GPUs would be the holy grail of the whole microservices paradigm, and we might be close. Hopefully that would make fine-tuning easy and accessible again.
My friend Selvan sent me this puzzle:
Feel free to give it a try before revealing my thought process and solution! Also, in case you're wondering: the sticks have to have marshmallows on both ends, the sticks are straight, and no two marshmallows can be in the same position or at infinity. Also, the sticks can cross (this doesn't violate the "2D" requirement). None of this was obvious to me!
At first, I looked at this as a graph. The graph is undirected and the vertices unlabelled. There are two possible edge weights, and the graph is not allowed to violate the triangle inequality. Intuitively, whenever edge weights are involved, I think of force-directed graphs (like a spring system with different length springs) that relax into a configuration where there's no tension in the springs.
Anyway, if you think about it as a graph, you'll realise that topologically, the first configuration is exactly the same as a square with an X in it. In fact, it's not possible for any other configuration to exist, as a graph with 4 vertices and 6 edges is complete. This means that we can't play around with the topology, only the edge weights (or rather, move the vertices around, if you think of it that way).
There is no alternative layout where a fourth vertex is inside a triangle like in the example, so the vertices *must* be in a quadrilateral layout. If you then build a trapezium using three long sticks and one short stick, you'll quickly see that there's a layout at which the shorter sticks are all the same length. I made a visualisation to help illustrate this:
Afterwards, Selvan prompted me to realise that the distance between the bottom-left corner and the point of intersection in the middle of the X should be the same as the red line distance, which answers at exactly which point the vertices along the red lines are equidistant from each other!
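If I'm reading the layout right, the trapezium here can be realised as four vertices of a regular pentagon, whose sides and diagonals give exactly two stick lengths (in the golden ratio, which also fits the diagonal-intersection property above). A quick numeric sketch to check that claim; the pentagon construction is my own reading, not from the puzzle text:

```python
import math
from itertools import combinations

# Four vertices of a regular pentagon on the unit circle
# (an assumed realisation of the trapezium configuration).
pts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5)) for k in range(4)]

dists = sorted(math.dist(p, q) for p, q in combinations(pts, 2))
short, long_ = dists[0], dists[-1]

# Exactly two distinct lengths, three sticks of each...
assert len({round(d, 9) for d in dists}) == 2
assert sum(math.isclose(d, short) for d in dists) == 3

# ...and their ratio is the golden ratio.
assert math.isclose(long_ / short, (1 + math.sqrt(5)) / 2)
print("stick length ratio:", long_ / short)
```

Running this confirms only two distinct pairwise distances among the four marshmallows, so it satisfies the puzzle's constraint.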
Obsidian Canvas was released today and I find this very exciting! As you might know, I'm a very visual thinker and try to organise my thoughts in ways that are more intuitive to me. I've always thought that an infinite canvas that you can place nested collapsible components and primitives on makes much more sense than a directory tree. I've used other tools for this, but the separation from my PKM tool (Obsidian) has always been a big barrier.
Obsidian keeps getting better over time! The canvas format seems relatively simple, so I reckon I could make these publishable. More importantly though, I think it will be quite useful for organising my thoughts internally. Currently I use a combination of whiteboard wallpaper, actual paper, and Samsung Notes on my S22 Ultra, the only not-bad Android note-taking app with good stylus support, though frustratingly it doesn't let you scroll the page infinitely in the horizontal direction!
It can be a bit frustrating to manipulate a canvas without over-reliance on a mouse, but short of a touch screen I don't think there are any more ergonomic ways to interact with these, and at least the keyboard shortcuts for Canvas seem good. Once AR becomes low-friction, I hope to use 3D spaces to organise documents and assets, as a true mind palace. For now, Obsidian Canvas will do nicely!
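To illustrate how simple the format appears to be: a `.canvas` file is just JSON with `nodes` and `edges` arrays. A minimal sketch that reads one and lists its connections (the field names reflect my understanding of the format and may not be exhaustive or stable across Obsidian versions):

```python
import json

# A tiny .canvas file, inlined for the sketch (normally read from disk).
# Field names are assumptions based on inspecting the format.
raw = """
{
  "nodes": [
    {"id": "a1", "type": "text", "text": "Idea", "x": 0, "y": 0, "width": 250, "height": 60},
    {"id": "b2", "type": "text", "text": "Follow-up", "x": 300, "y": 0, "width": 250, "height": 60}
  ],
  "edges": [
    {"id": "e1", "fromNode": "a1", "toNode": "b2"}
  ]
}
"""

canvas = json.loads(raw)
labels = {n["id"]: n.get("text", n["id"]) for n in canvas["nodes"]}
for edge in canvas["edges"]:
    print(labels[edge["fromNode"]], "->", labels[edge["toNode"]])
# prints: Idea -> Follow-up
```

Since it's plain JSON, generating a publishable HTML view from it should be a matter of walking these two arrays.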
/u/dismantlemars created a Colab to run OpenAI's new Point-E model that you can use here. My first few experiments were interesting, though not very usable yet! Supposedly it's thousands of times faster than DreamFusion (the best-known attempt at this). It took me about 30 seconds to generate models, and converting the point cloud to a mesh was instant.
I tried to first turn my profile picture into 3D, which came out all Cronenberg'd. To be fair, the example images are all really clean renderings of 3D models, rather than a headshot of a human.
Then I tried the text prompt "a pink unicorn" which came out as an uninteresting pink blob vaguely in the shape of a rocking horse. Simply "unicorn" looked a bit more like a little dinosaur.
And finally, "horse" looked like a goat-like horse in the end.
The repo does say that the text-to-point-cloud model, compared to the image-to-point-cloud model, is "small, worse quality [...]. This model's capabilities are limited, but it does understand some simple categories and colors."
I still find it very exciting that this is even possible in the first place. Probably less than a year ago, I spoke to the anything.world team, and truly AI-generated models seemed so far out of reach. Now I feel like it won't be much longer before we can populate entire virtual worlds just by speaking!
On a related note, if you're after an API for this, I recommend joining the Luma waitlist over DreamFusion.
There are APIs out there for translating natural language to actions that a machine can take. An example from wit.ai is the IoT thermostat use case.
But why not use GPT-3 for this instead? It ought to handle it well. And as I suspected, the results were quite good! The green highlighted text is AI-generated (so were the closing braces, but for some reason it didn't highlight those).
I think there's a lot here that can be expanded! E.g. you could define a schema beforehand rather than just giving it some examples like I have, but I quite like this test-driven approach of defining what I actually want.
I made some tweaks to teach it that I want it to put words in my mouth, as it were. It invented a new intent that I hadn't defined, so it would probably be useful to define an array of valid intents at the top. It did, however, manage to sweet-talk my "wife"!
I think this could work quite well in conjunction with other "modules", e.g. a prompt that takes a recipient and a list of people I know (and what their relationship is to me), and outputs a phone number.
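As a sketch of the shape this could take (the prompt, intent names, and example requests are mine for illustration, and the model call is stubbed out rather than hitting a real completion API):

```python
import json

# Whitelist of intents, so the model can't invent ones we never defined.
VALID_INTENTS = {"set_temperature", "send_message", "play_music"}

# Few-shot examples teach the model the output shape, test-driven style.
FEW_SHOT = """\
Convert the user's request into a JSON action.

Request: make it 21 degrees in here
Action: {"intent": "set_temperature", "value": 21}

Request: tell my wife I'll be late
Action: {"intent": "send_message", "recipient": "wife", "text": "I'll be late"}

Request: """

def build_prompt(request: str) -> str:
    return FEW_SHOT + request + "\nAction:"

def parse_action(completion: str) -> dict:
    action = json.loads(completion)
    if action.get("intent") not in VALID_INTENTS:
        raise ValueError(f"unknown intent: {action.get('intent')}")
    return action

# In real use this string would come back from a completion API;
# hardcoded here to keep the sketch self-contained.
fake_completion = '{"intent": "set_temperature", "value": 19}'
print(parse_action(fake_completion))
```

The validation step is the part I'd lean on: anything outside the whitelist gets rejected before it can trigger an action.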
Amazon's creating AI-generated animated bedtime stories (story arc, images, and accompanying music) with customisable setting, tone, characters, and score. I believe that procedurally generated virtual worlds will be one of the prime use cases for these large models, and this is one example that I expect to see more of!
I think the most difficult part here will be crafting truly compelling and engaging stories, though this will probably be solved soon. My brother and I attempted a similar project (AI-generated children's books) and the overall quality was not good enough at the time, but at the speed these things move, I expect that to be a thing of the past in a matter of months!
Yesterday GitHub Copilot engineers borked production and I felt like someone had suddenly turned the lights off.
I hadn't realised how accustomed I had become to using it until this happened. I would make my intent clear in the code, then wait for it to do its thing, then it just wouldn't. Y'all got any more of them AIs?
At the same time, the next time you deploy a bad build to production, remember that even the big guys do it!
I wrote an article on bruteforcing Tailscale domain names (code included!): https://yousefamar.com/memo/articles/hacks/tailnet-name/
I'm letting day-nft.com expire. This was an experiment with 3 other people where we minted simple NFTs that each correspond to a different date going back something like 10 years. The technical part was relatively straightforward, but we realised that the whole thing is just one big hype game, and for it to succeed we would need to do things we weren't morally comfortable with, so we abandoned the project. By that point I had already done some research and analysis on NFT marketplaces (which I intend to publish at some point) that helped cement the views I now hold about this space.
Seems like GPT-4 is just around the corner! I'm really looking forward to it, and not just for the improvement over GPT-3, but for the multi-modal inputs. I really think GPT-4 and models like it will be central to our future.
Nvidia's new diffusion model is really pushing the envelope. A lot of exciting capabilities!
I'm certain the market for GPT-3-based spreadsheet plugins/add-ons is far riper for sales than libraries that target developers, like cerebrate.ai. I've seen a general-purpose add-on for Google Sheets here, but I think that crafting these prompts to do specific things, and wrapping them in higher-level functions, has much more potential.
More Stable Diffusion resource links: https://rentry.org/sdupdates2
It's official — Amarbot has his own number. I did this because I was using him to send some monitoring messages to WhatsApp group chats, but since it was through my personal account, it would mark everything before those messages as read, even though I hadn't actually read them.
My phone allows me to have several separate instances of WhatsApp out of the box, so all I needed was another number. I went for Fanytel to get a virtual number and set up a second WhatsApp bridge for Matrix. Then I also put my profile picture through Stable Diffusion a few times to make him his own profile picture, and presto: Amarbot now has his own number!
In case the profile picture is not clear enough, the status message also says that he's not real. I have notifications turned off for this number, so if you interact with him, don't expect a human to ever reply!
Some of my HNS domains are expiring soon and I don't think I'll renew them. While the concept is super cool, unless Chrome and Safari adopt HNS, it'll never go anywhere. I now think it's very unlikely that they ever will.
I wrote an article on Y Combinator and the drama with DreamWorld: https://yousefamar.com/memo/articles/entrepreneurship/y-combinator/
Almost exactly 6 years ago, I ate too many Pringles, as my photo app's throwback feature reminded me. My brother won a contest where the prize was crates of Pringles and he gave me all the sour cream and onion ones. I ate too many of them in too short a time and since then I've kind of lost my taste for them. The same thing happened to me with peanuts — I used to love them and now I basically never eat them.
When I was a student, I got an Oyster photocard for a commuter discount. Eventually I also had my railcard added to it (though IIRC, the discounts aren't cumulative). I had it renewed at the last possible moment before expiry and ageing out, and the new card was meant to expire on the 31st of Jan 2020. It never did, and I've been using it since. Maybe the expiry only applied to the discount?
Eventually the outermost plastic layers peeled off (the layer with my name and photo on it) leaving an ominous blank card.
The card number had also peeled off, so when I had an incomplete journey one day, a friendly TfL employee helped me sort it out and showed me the number on a receipt of my past few journeys. Only then did I really think about what the point of using an Oyster card is (since I'm not getting discounts anymore) over a contactless credit card.
It seems there isn't really much of a benefit for me, so I'll probably just let it run out and stop using it. I might draw a little picture in that empty spot.
I had a normal Oyster card many, many years ago (before the first photocard) that I at some point added to the online dashboard with 60p still on it. I had given that Oyster card to a homeless lady thinking there was more than that on it, and she probably tossed it. I reckon if I plan my last trip so that the balance goes to -60p and never top it up again, my overall balance with TfL should be... well, balanced!
Hello twitter! This post was syndicated using Bridgy.
As of today, if you react to a message you send me on WhatsApp with a robot emoji (🤖), Amarbot will respond instead of me. As people have complained in the past about not knowing whether they're talking to me or a bot, I added a very clear disclaimer to the bottom of all bot messages. This is also so I can filter them out later if/when I want to retrain the model (similar to how DALL-E 2 has the little rainbow watermark).
The reason I was able to get this working quite easily is my existing Node-RED setup. I'll talk more about this in the future, but essentially I have my WhatsApp connected to Matrix, and Node-RED also connected to Matrix. I watch for message reactions, but because those events don't tell you the actual text of the message that was reacted to, only its ID, I store a small window of past messages to check against. Then I query the Amarbot worker with the body of that message, and format and respond with the reply.
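The message window is the interesting bit: since reaction events only carry the reacted-to message's ID, you keep a small rolling cache of recent messages to look the text up in. A rough sketch of that logic (the names and the stubbed bot call are mine for illustration, not my actual Node-RED flow):

```python
from collections import OrderedDict

WINDOW = 50  # how many recent messages to remember

class MessageWindow:
    """Rolling cache of recent message bodies, keyed by event ID."""

    def __init__(self, size: int = WINDOW):
        self.size = size
        self.messages: OrderedDict[str, str] = OrderedDict()

    def remember(self, event_id: str, body: str) -> None:
        self.messages[event_id] = body
        # Evict the oldest entries once the window is full.
        while len(self.messages) > self.size:
            self.messages.popitem(last=False)

    def lookup(self, event_id: str):
        return self.messages.get(event_id)

def on_reaction(window: MessageWindow, event_id: str, emoji: str):
    # Only the robot emoji should summon the bot.
    if emoji != "🤖":
        return None
    body = window.lookup(event_id)
    if body is None:
        return None  # the message fell out of the window
    # In the real flow this queries the Amarbot worker; stubbed here.
    return f"[bot reply to: {body}]"

window = MessageWindow()
window.remember("evt1", "tell me a joke")
print(on_reaction(window, "evt1", "🤖"))
# prints: [bot reply to: tell me a joke]
```

Keeping the window bounded means old reactions silently do nothing, which is an acceptable failure mode for a toy like this.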
This integrates quite seamlessly with other existing logic I had, like what happens if you ask me to tell you a joke!
Amarbot has been trained on the entirety of my WhatsApp chat logs since the beginning of 2016, which I think is when I first installed it. There are a handful of days of logs missing here and there as I've had mishaps with backing up and moving to new phones. It was challenging to extract my chat logs from my phone, so I wrote an article about this.