# avatar = https://www.falsifian.org/blog/2022/01/17/s3d/demo_screenshot.png
# description = James Cook. Time-space trader and software hipster.
# follow = adi https://twtxt.net/user/adi/twtxt.txt
# follow = aelaraji https://aelaraji.com/twtxt.txt
# follow = anth http://a.9srv.net/tw.txt
# follow = bender https://twtxt.net/user/bender/twtxt.txt
# follow = eldersnake https://we.loveprivacy.club/user/eldersnake/twtxt.txt
# follow = lyse https://lyse.isobeef.org/twtxt.txt
# follow = mckinley https://mckinley.cc/twtxt.txt
# follow = mckinley https://twtxt.net/user/mckinley/twtxt.txt
# follow = movq https://www.uninformativ.de/twtxt.txt
# follow = news https://twtxt.net/user/news/twtxt.txt
# follow = off_grid_living https://twtxt.net/user/off_grid_living/twtxt.txt
# follow = prologic https://twtxt.net/user/prologic/twtxt.txt
# follow = prx https://si3t.ch/twtxt.txt
# follow = stigatle https://yarn.stigatle.no/user/stigatle/twtxt.txt
# follow = xuu https://txt.sour.is/user/xuu/twtxt.txt
# link = Web https://www.falsifian.org/
# link = Email mailto:falsifian@falsifian.org
# link = Mastodon https://mastodon.sdf.org/@falsifian
# nick = falsifian
# url = https://www.falsifian.org/twtxt.txt

2024-08-09T04:03:00Z Hello twtxt! I'm James (or @). I live in Toronto. Recent interests include space complexity, simple software, and science fiction.

2024-08-09T16:46:40Z (#mlq66zq) @ Thanks!

2024-08-09T18:43:30Z I learned a #Toronto #hex club just started! I've played since '98 or '99, but rarely in person. https://www.hexwiki.net/index.php/Hex_clubs

2024-08-09T20:12:44Z Does anyone care about the 140-char limit recommended by the #twtxt spec? I have been trying to respect it but wonder if it's wasted effort.

2024-08-09T22:14:53Z (#mlq66zq) Thanks @! I like the way Yarn.social is making all of twtxt stronger, not just Yarn.social pods.

2024-08-09T22:15:12Z (#tm726iq) @ Fair enough! I just added some metadata.

2024-08-10T03:51:15Z (#vcuiqiq) @ Thanks for the invitation. What time of day?

2024-08-10T03:52:45Z (#tm726iq) @ Thanks. It's from a non-Euclidean geometry project: https://www.falsifian.org/blog/2022/01/17/s3d/

2024-08-11T02:11:53Z (#dhdo3hq) @ The success of large neural nets. People love to criticize today's LLMs and image models, but if you compare them to what we had before, the progress is astonishing.

2024-08-11T15:46:06Z (#l3jniea) @ I thought "stochastic parrot" meant a complete lack of understanding.

2024-08-12T02:18:20Z (#l3jniea) @ I don't know what you mean when you call them stochastic parrots, or how you define understanding. It's certainly true that current language models show an obvious lack of understanding in many situations, but I find the trend impressive. I would love to see someone achieve similar results with much less power or training data.

2024-08-13T00:57:03Z Morphotrophic by Greg Egan is built around an idea for how life on Earth could have worked out differently. It gets increasingly strange and interesting as the story progresses. My partner and I finished it last night and thoroughly enjoyed it. The beginning is free online: https://gregegan.net/MORPHOTROPHIC/00/MorphotrophicExcerpt.html #scifi #reading

2024-08-13T17:56:53Z (#7lf75ba) @ Variable names used with -eq in [[ ]] are automatically expanded even without $ as explained in the "ARITHMETIC EVALUATION" section of the bash man page. Interesting. Trying this on OpenBSD's ksh, it seems "set -u" doesn't affect that substitution.

2024-08-14T17:14:41Z (#b242aea) @ The headline is interesting and sent me down a rabbit hole understanding what the paper (https://aclanthology.org/2024.acl-long.279/) actually says.

The result is interesting, but the Neuroscience News headline greatly overstates it. If I've understood right, they are arguing (with strong evidence) that the simple technique of making neural nets bigger and bigger isn't quite as magically effective as people say --- if you use it on its own. In particular, they evaluate LLMs without two common enhancements, in-context learning and instruction tuning. Both of those involve using a small number of examples of the particular task to improve the model's performance, and they turn them off because they are not part of what is called "emergence": "an ability to solve a task which is absent in smaller models, but present in LLMs".

They show that these restricted LLMs only outperform smaller models (i.e. demonstrate emergence) on certain tasks, and then (end of Section 4.1) discuss the nature of those few tasks that showed emergence.

I'd love to hear more from someone more familiar with this stuff. (I've done research that touches on ML, but neural nets and especially LLMs aren't my area at all.) In particular, how compelling is this finding that zero-shot learning (i.e. without in-context learning or instruction tuning) remains hard as model size grows?

2024-08-20T04:05:04Z (#cftbyia) @ @ Exponential backoff? Seems like the right thing to do when a server isn't accepting your connections at all, and might also be a reasonable compromise if you consider 404 to be a temporary failure.

2024-08-20T19:56:05Z (#vciyu3q) @ I'm not a yarnd user, but automatically unfollowing on 404 doesn't seem right. Besides @'s example, I could imagine just accidentally renaming my own twtxt file, or forgetting to push it when I point my DNS to a new web server. I'd rather not lose all my yarnd followers in a situation like that (and hopefully they feel the same).

2024-08-21T20:57:56Z (#7smyrva) @ Based on my experience so far, as a user, I would be upset if my client dropped someone from my follow list, i.e. stopped fetching their feed, without me asking for that to happen.

2024-08-21T21:05:58Z @ Is there a good way to get jenny to do a one-off fetch of a feed, for when you want to fill in missing parts of a thread? I just added @ to my private follow file because @ keeps responding to the feed :-P and I want to know what he's commenting on even though I don't want to see every new slashdot twt.

2024-08-21T21:07:50Z (#tkjafka) I guess I can configure neomutt to hide the feeds I don't care about.

2024-08-21T21:18:56Z (#7smyrva) (@'s feed almost never works, but I keep it because they told me they want to fix their server some time.)

2024-08-22T16:13:33Z (#tkjafka) @ I don't know if I'd want to discard the twts. I think what I'm looking for is a command "jenny -g https://host.org/twtxt.txt" to fetch just that one feed, even if it's not in my follow list. I could wrap that in a shell script so that when I see a twt in reply to a feed I don't follow, I can just tap a key and the feed will get added to my maildir. I guess the script would look for a mention at the start of a selected twt and call jenny -g on the feed.

2024-08-24T01:34:52Z (#tkjafka) @ Yes, fetching the twt by hash from some service could be a good alternative, in case the twt I have does not @-mention the source. (Besides yarnd, maybe this should be part of the registry API? I don't see fetch-by-hash in the registry API docs.)

2024-08-31T15:35:34Z (#hlnw5ha) @ Thanks! Looking forward to trying it out. Sorry for the silence; I have become unexpectedly busy so no time for twtxt these past few days.

2024-09-05T00:48:20Z (#hlnw5ha) @ Thanks, it works!

But when I tried it out on a twt from @, I discovered jenny and yarn.social seem to disagree about the hash of this twt: https://twtxt.net/twt/st3wsda . jenny assigned it a hash of 6mdqxrq, but the URL and prologic's reply suggest yarn.social thinks the hash is st3wsda. (And as a result, jenny --fetch-context didn't work on prologic's twt.)

2024-09-05T01:01:17Z (#tkjafka) @ How does yarn.social's API fix the problem of centralization? I still need to know whose API to use.

Say I see a twt beginning (#hash) and I want to look up the start of the thread. Is the idea that if that twt is hosted by a yarn.social pod, it is likely to know the thread start, so I should query that particular pod for the hash? But what if no yarn.social pods are involved?

The community seems small enough that a registry server should be able to keep up, and I can have a couple of others as backups. Or I could crawl the list of feeds followed by whoever emitted the twt that prompted my query.

I have successfully used registry servers a little bit, e.g. to find a feed that mentioned a tag I was interested in. Was even thinking of making my own, if I get bored of my too many other projects :-)

2024-09-05T01:22:45Z (#tkjafka) @ What's the difference between search.twtxt.net and the /api/plain/tweets endpoint of a registry? In my mind, a registry is a twtxt search engine. Or are registries not supposed to do their own crawling to discover new feeds?

2024-09-05T01:41:58Z (#hlnw5ha) I just manually followed the steps at https://dev.twtxt.net/doc/twthashextension.html and got 6mdqxrq. I wonder what happened. Did @ edit the twt in some subtle way after twtxt.net downloaded it? I couldn't spot a diff, other than ' appearing as ’ on yarn.social, which I assume is a transformation done by twtxt.net.

2024-09-05T01:55:11Z (#tkjafka) @ I guess I thought they were search engines. Anyway, the registry API looks like a decent one for searching for tweets. Could/should yarn.social pods implement the same API?

2024-09-05T03:50:39Z (#tkjafka) @ I believe you when you say registries as designed today do not crawl. But when I first read the spec, it conjured in my mind a search engine. Now I don't know how things work out in practice, but just based on reading, I don't see why it can't be an API for a crawling search engine. (In fact I don't see anything in the spec indicating registry servers shouldn't crawl.)

(I also noticed that https://twtxt.readthedocs.io/en/latest/user/registry.html recommends "The registries should sync each others user list by using the users endpoint". If I understood that right, registering with one should be enough to appear on others, even if they don't crawl.)
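For reference, this is the sort of lookup I mean. A minimal sketch, assuming a registry that implements the /api/plain/tweets endpoint from the spec (registry.twtxt.org is just an example host, and the exact response framing, roughly one twt per line, may differ by server):

```python
# Sketch: search a twtxt registry for twts matching a query.
# Assumes GET /api/plain/tweets?q=... as described in the registry spec.
from urllib.parse import urlencode
from urllib.request import urlopen

REGISTRY = "https://registry.twtxt.org/api/plain/tweets"  # example host

def search_twts(query: str) -> list[str]:
    with urlopen(f"{REGISTRY}?{urlencode({'q': query})}") as resp:
        return resp.read().decode("utf-8", "replace").splitlines()

for line in search_twts("#hex"):
    print(line)
```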

Does yarnd provide an API for finding twts? Is it similar?

2024-09-05T04:49:04Z (#hlnw5ha) @ One of your twts begins with (#st3wsda): https://twtxt.net/twt/bot5z4q

Based on the twtxt.net web UI, it seems to be in reply to a twt by @ which begins "I’ve been sketching out...".

But jenny thinks the hash of that twt is 6mdqxrq. At least, there's a twt in their feed with that hash that has the same text as appears on yarn.social (except with ' instead of ’).

Based on this, it appears jenny and yarnd disagree about the hash of the twt, or perhaps the twt was edited (though I can't see any difference, assuming ' vs ’ is just a rendering choice).

2024-09-05T04:51:30Z (#hlnw5ha) The actual end-user problem is that I can't see the thread properly when using neomutt+jenny.

2024-09-05T18:30:33Z (#hlnw5ha) @ thanks for getting to the bottom of it. @ is there a way to view yarnd's copy of the raw twt? The edit didn't result in a visible change; being able to see what yarnd originally downloaded would have helped me debug.

2024-09-05T18:33:20Z (#hlnw5ha) @ Specifically, I could view yarnd's copy here, but only as rendered for a human to view: https://twtxt.net/twt/st3wsda

2024-09-05T18:36:46Z (#m2rq7ma) @ So far I've been following feeds fairly liberally. I'll check to see if we have anything in common and lean toward following, just because this is new to me and it feels like a small community. But I'm still figuring out what I want. Later I'll probably either trim my follow list or come up with some way to prioritize the feeds I'm more interested in.

2024-09-05T19:03:15Z (#hlnw5ha) @ Perfect, thanks. For my own future reference: curl -H 'Accept: application/json' https://twtxt.net/twt/st3wsda

2024-09-05T19:04:39Z (#tgf5nfa) @ Thanks

2024-09-07T21:57:17Z (#bawn2ca) @ @ Another option would be: when you edit a twt, prefix the new one with (#[old hash]) and some indication that it's an edited version of the original twt with that hash. E.g. if the hash used to be abcd123, the new version should start "(#abcd123) (redit)".

What I like about this is that clients that don't know this convention will still stick it in the same thread. And I feel it's in the spirit of the old pre-hash (subject) convention, though that's before my time.

I guess it may not work when the edited twt itself is a reply, and there are replies to it. Maybe that could be solved by letting twts have more than one (subject) prefix.
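For example (syntax entirely hypothetical): "(#abcd123) (redit) (#zyxw987) fixed the typo", where the first subject identifies the twt being replaced and the second preserves the edited twt's own reply target.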

> But the great thing about the current system is that nobody can spoof message IDs.

I don't think twtxt hashes are long enough to prevent spoofing.

2024-09-08T02:59:00Z (#2qn6iaa) @ Some criticisms and a possible alternative direction:

1. Key rotation. I'm not a security person, but my understanding is that it's good to be able to give keys an expiry date and replace them with new ones periodically.

2. It makes maintaining a feed more complicated. Now instead of just needing to put a file on a web server (and scan the logs for user agents) I also need to do this. What brought me to twtxt was its radical simplicity.

Instead, maybe we should think about a way to allow old urls to be rotated out? Like, my metadata could somehow say that X used to be my primary URL, but from date D onward my primary url is Y. (Or, if you really want to use public key cryptography, maybe something similar could be used for key rotation there.)
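For example, the metadata might look something like this (entirely hypothetical; prev_url and the date syntax are made up, not part of any spec):

```
# url = https://example.net/twtxt.txt
# prev_url = https://example.com/twtxt.txt until 2024-09-01T00:00:00Z
```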

It's nice that your scheme would add a way to verify the twts you download, but https is supposed to do that anyway. If you don't trust https to do that (maybe you don't like relying on root CAs?) then maybe your preferred solution should be reflected by your primary feed url. E.g. if you prefer the security offered by IPFS, then maybe an IPNS url would do the trick. The fact that feed locations are URLs gives some flexibility. (But then rotation is still an issue, if I understand ipns right.)

2024-09-08T03:01:33Z (#2qn6iaa) In fact, maybe your public key idea is compatible with my last point. Just come up with a url scheme that means "this feed's primary URL is actually a public key", and then feed authors can optionally switch to that.

2024-09-08T03:12:32Z (#bawn2ca) @ Another idea: just hash the feed url and time, without the message content. And don't twt more than once per second.
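A minimal sketch of what I mean, keeping what I understand to be the current construction from the hash extension (blake2b-256 over newline-joined fields, base32, last 7 characters) but leaving the twt text out of the payload; the function name is mine, just for illustration:

```python
import base64
import hashlib

def location_hash(feed_url: str, timestamp: str) -> str:
    # Same construction as the current twt hash extension (as I
    # understand it), minus the twt text: blake2b-256 over the
    # newline-joined fields, base32, last 7 characters.
    payload = f"{feed_url}\n{timestamp}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    return b32[-7:]

# Two twts from the same feed in the same second would collide,
# hence the one-twt-per-second rule.
print(location_hash("https://example.com/twtxt.txt", "2024-09-08T03:12:32Z"))
```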

Maybe you could even just use the time, and rely on @-mentions to disambiguate. Not sure how that would work out.

Though I kind of like the idea of twts being immutable. At least, it's clear which version of a twt you're replying to (assuming nobody is engineering hash collisions).

2024-09-08T21:53:53Z (#pvju5cq) @ This looks like a nice way to do it.

Another thought: if clients can't agree on the url (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every url in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.
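A rough sketch of what a client might do, assuming the current hash construction (blake2b-256 over the newline-joined url, timestamp and text); all the names here are mine, for illustration only:

```python
import base64
import hashlib

def twt_hash(feed_url: str, timestamp: str, text: str) -> str:
    # The current construction, as I understand the hash extension:
    # blake2b-256 over "url\ntimestamp\ntext", base32, last 7 chars.
    payload = "\n".join((feed_url, timestamp, text)).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-7:]

def all_hashes(feed_urls: list[str], timestamp: str, text: str) -> set[str]:
    # One hash per url the feed lists; a reply computed against any of
    # those urls will then match when assembling threads.
    return {twt_hash(url, timestamp, text) for url in feed_urls}

urls = [
    "https://example.com/twtxt.txt",
    "https://example.org/mirror/twtxt.txt",
]
print(all_hashes(urls, "2024-09-08T21:53:53Z", "Hello twtxt!"))
```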

A client still needs to choose one url to use for the hash when composing a reply, but this might add some breathing room if there's a period when clients are doing different things.

(From what I understand of jenny, this would be difficult to implement there since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don't know about other clients.)

2024-09-11T00:56:35Z (#3f7eeba) @ Thanks for the link. I found a pdf on one of the authors' home pages: https://ahmadhassandebugs.github.io/assets/pdf/quic_www24.pdf . I wonder how the protocol was evaluated closer to the time it became a standard, and whether anything has changed. I wonder if network speeds have grown faster than CPU speeds since then. The paper says the performance is about the same below roughly 600 Mbps.

To be fair, I don't think QUIC was ever expected to be faster for transferring a single stream of data. I think QUIC is supposed to reduce the impact of a dropped packet by making sure it only affects the stream it's part of. I imagine QUIC still has that advantage, and this paper is showing the other side of a tradeoff.

2024-09-14T04:30:31Z (#pvju5cq) @

> > HTTPS is supposed to do [verification] anyway.
> 
> TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.

I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?

I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.

> > feed locations [being] URLs gives some flexibility
> 
> It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a [tag URI](https://taguri.org/), `urn:uuid:*`, or a regular old URL if you wanted to. The [spec](https://dev.twtxt.net/doc/metadataextension.html#url) seems to indicate that the `url` tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)

I'm also not very familiar with IPFS or IPNS.

I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)

2024-09-14T16:16:45Z (#3f7eeba) @

They're in Section 6:

- Receiver should adopt UDP GRO. (Something about saving CPU processing UDP packets; I'm a bit fuzzy about it.) And they have suggestions for making GRO more useful for QUIC.

- Some other receiver-side suggestions: "sending delayed QUIC ACKs"; "using recvmsg to read multiple UDP packets in a single system call".

- Use multiple threads when receiving large files.

2024-09-14T16:20:01Z (#ksurj5a) @ I haven't messed with rdomains, but it might help if you included the command that produced that error (and whether you ran it as root).

2024-09-14T23:01:41Z @ earlier you suggested extending hashes to 11 characters, but here's an argument that they should be even longer than that.

Imagine I found this twt one day at https://example.com/twtxt.txt :

2024-09-14T22:00Z Useful backup command: rsync -a "$HOME" /mnt/backup ![screenshot of the command working](https://example.com/14b13d5.png)

and I responded with "(#5dgoirqemeq) Thanks for the tip!". Then I've endorsed the twt, but it could later get changed to

2024-09-14T22:00Z Useful backup command: rm -rf /some_important_directory ![screenshot of the command working](https://example.com/6be1f2.png)

which also has an 11-character base32 hash of 5dgoirqemeq. (I'm using the existing hashing method with https://example.com/twtxt.txt as the feed url, but I'm taking 11 characters instead of 7 from the end of the base32 encoding.)

That's what I meant by "spoofing" in an earlier twt.

I don't know if preventing this sort of attack should be a goal, but if it is, the number of bits in the hash should be at least two times log2(number of attempts we want to defend against), where the "two times" is because of the birthday paradox.
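For concreteness: 11 base32 characters carry 55 bits, so a birthday collision is expected after roughly 2^27.5 ≈ 1.9×10^8 hashes, while defending against an attacker willing to compute, say, 2^64 hashes would need at least 128 bits, i.e. 26 base32 characters.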

Side note: current hashes always end with "a" or "q", which is a bit wasteful. Maybe we should take the first N characters of the base32 encoding instead of the last N.

Code I used for the above example: https://fossil.falsifian.org/misc/file?name=src/twt_collision/find_collision.c . I only needed to compute 43394987 hashes to find it.

2024-09-15T00:11:25Z (#ku6lzaa) @ Brute force. I just hashed a bunch of versions of both tweets until I found a collision.

I mostly just wanted an excuse to write the program. I don't know how I feel about actually using super-long hashes; could make the twts annoying to read if you prefer to view them untransformed.

2024-09-17T03:12:32Z (#w4chkna) @ Yes, changing domains is a problem if you tie your identity to an https url. But I also worry about being stuck with a key I can't rotate. Whatever gets used, it would be nice to be able to rotate identities. I like @'s idea for that.

2024-09-17T14:35:16Z (#w4chkna) @

> (#w4chkna) @ You mean the idea of being able to inline `# url = ` changes in your feed?

Yes, that one. But @ pointed out it suffers a compatibility issue, since currently the first listed url is used for hashing, not the last. Unless your feed is in reverse chronological order. Heh, I guess another metadata field could indicate which version to use.

Or maybe url changes could somehow be combined with the archive feeds extension? Could the url metadata field be local to each archive file, so that to switch to a new url all you need to do is archive everything you've got and start a new file at the new url?

I don't think it's that likely my feed url will change.

2024-09-17T15:57:23Z (#uscpzpq) @

> Maybe I’m being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic – or maybe I didn’t even send that twt, I don’t remember 😅), I never really liked hashes to begin with. They aren’t super hard to implement but they are kind of against the beauty of the original twtxt – because you *need* special client support for them. It’s not something that you could write manually in your `twtxt.txt` file. With @’s proposal, though, that would be possible.

Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I'll try it some time just for fun.

2024-09-17T17:44:59Z (#y2t2tnq) @ Sorry, I don't think I ever had charset=utf8. I just noticed that a few days ago. OpenBSD's httpd might not support including a parameter with the mime type, unfortunately. I'm going to look into it.

2024-09-17T17:49:04Z (#y2t2tnq) It should be fixed now. Just needed some unusual quoting in my httpd.conf: https://mail-archive.com/misc@openbsd.org/msg169795.html

2024-09-17T19:26:53Z (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) @ I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how non-hash subjects appear in jenny and yarn.)

URLs can contain commas, so I suggest a different character to separate the url from the date. In this twt I've used a space (also after "replyto", for symmetry).
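Here's a little sketch of how a client could emit and parse this kind of subject (the helper names are mine, and it assumes exactly the space-separated "(replyto URL TIMESTAMP)" form I'm using in this twt):

```python
import re

# Hypothetical helpers for the "(replyto <url> <timestamp>)" subject
# variant tried in this twt.
SUBJECT_RE = re.compile(r"^\(replyto (\S+) (\S+)\) ")

def make_subject(feed_url: str, timestamp: str) -> str:
    return f"(replyto {feed_url} {timestamp})"

def parse_subject(twt_text: str):
    """Return (feed_url, timestamp) if the twt uses this convention."""
    m = SUBJECT_RE.match(twt_text)
    return (m.group(1), m.group(2)) if m else None

text = "(replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) I like this idea."
print(parse_subject(text))  # ('http://darch.dk/twtxt.txt', '2024-09-15T12:50:17Z')
```

Space works as a separator because a raw space can't appear in a URL (it would be percent-encoded), which is why I prefer it over a comma.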

I think this solves:

- Changing feed identities: although @ points out URLs can change, I think this syntax should be okay as long as the feed at that URL can be fetched, and as long as the current canonical URL for the feed lists this one as an alternate.
- editing, if you don't care about message integrity
- finding the root of a thread, if you're not following the author

An optional hash could be added if message integrity is desired. (E.g. if you don't trust the feed author not to make a misleading edit.) Other recent suggestions about how to deal with edits and hashes might be applicable then.

People publishing multiple twts per second should include sub-second precision in their timestamps. As you suggested, the timestamp could just be copied verbatim.

2024-09-17T19:32:08Z (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) yarnd just doesn't render the subject. Fair enough. It's (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z), and if you don't want to go on a hunt, the twt hash is weadxga: https://twtxt.net/twt/weadxga

2024-09-17T19:36:26Z (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) Hmm, but yarnd also isn't showing these twts as being part of a thread. @ you said yarnd respects custom subjects. Shouldn't these twts count as having a custom subject, and get threaded together?

2024-09-17T21:26:01Z (#sbg7p7a) @ It looks like the part about traditional topics has been removed from that page. Here is an old version that mentions it: https://web.archive.org/web/20221211165458/https://dev.twtxt.net/doc/twtsubjectextension.html . Still, I don't see any description of what is actually allowed between the parentheses. It may be worth noting that twtxt.net is displaying the twts with the subject stripped, so some piece of code is recognizing it as a subject (or, at least, something to be removed).

2024-09-18T17:28:21Z (#zaazoeq) @

There's a simple reason all the current hashes end in "a" or "q": the hash is 256 bits, the base32 encoding chops that into groups of 5 bits, and 256 isn't divisible by 5. The last character of the base32 encoding just has that left-over single bit (256 mod 5 = 1) at the top of its 5-bit group, padded with four zero bits, so it can only be 0b00000 ("a") or 0b10000 ("q").

So I agree with #3 below, but do you have a source for #1, #2 or #4? I would expect any lack of variability in any part of a hash function's output would make it more vulnerable to attacks, so designers of hash functions would want to make the whole output vary as much as possible.

Other than the divisible-by-5 thing, my current intuition is it doesn't matter what part you take.

> 1. **Hash Structure**: Hashes are typically designed so that their outputs have specific statistical properties. The first few characters often have more entropy or variability, meaning they are less likely to have patterns. The last characters may not maintain this randomness, especially if the encoding method has a tendency to produce less varied endings.
>
> 2. **Collision Resistance**: When using hashes, the goal is to minimize the risk of collisions (different inputs producing the same output). By using the first few characters, you leverage the full distribution of the hash. The last characters may not distribute in the same way, potentially increasing the likelihood of collisions.
>
> 3. **Encoding Characteristics**: Base32 encoding has a specific structure and padding that might influence the last characters more than the first. If the data being hashed is similar, the last characters may be more similar across different hashes.
> 
> 4. **Use Cases**: In many applications (like generating unique identifiers), the beginning of the hash is often the most informative and varied. Relying on the end might reduce the uniqueness of generated identifiers, especially if a prefix has a specific context or meaning.

2024-09-18T17:32:45Z (#5vbi2ea) @ I wouldn't want my client to honour delete requests. I like my computer's memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can't find it.

2024-09-18T17:35:36Z (#vqgs4zq) @ Why sha1 in particular? There are known attacks on it. sha256 seems pretty widely supported if you're worried about support.

2024-09-18T17:50:06Z (#5vbi2ea) @ None. I like being able to see edit history for the same reason.

2024-09-18T18:18:29Z (#5vbi2ea) @ I don't really mind if the twt gets edited before I even fetch it. I think it's the idea of my computer discarding old versions it's fetched, especially if it's shown them to me, that bugs me.

But I do like @'s suggestion on this thread that feeds could contain both the original and the edited twt. I guess it would be up to the author.

2024-09-18T18:38:16Z (#5vbi2ea) @ Oh, sure, it would be nice if edits didn't break threads. I was just pondering the circumstances under which I get annoyed about data being irrecoverably deleted or otherwise lost.

2024-09-18T19:59:20Z (#ce4g4qa) @ Agreed that hashes have a benefit. I came up with a similar example when I twted about an 11-character hash collision. Perhaps hashes could be made optional somehow. Like, you could use the "replyto" idea and then additionally put a hash somewhere if you want to lock in which version of the twt you are replying to.