You’re absolutely right. This changes everything. The html file is the smoking gun. Let me delve into this to give the user a clearer picture.
culi 5 hours ago [-]
When AI gains true sentience, they're gonna be really upset at all the people mocking their awkward teenage years. They're just trying to fit in!
jamiecurle 22 hours ago [-]
That's not a reply – that's a retort.
bronxpockfabz 23 hours ago [-]
> hosted nowhere
> present everywhere
> Still here when the internet isn't
I'm afraid the OP may not fully understand how the internet works. This is either some kind of post-irony, or some vibe-code fever dream.
Either way, I'm deeply confused.
embedding-shape 23 hours ago [-]
I guess in theory if this is packaged as a PWA (or the old-school way, a single .html with everything needed inside of it) you could actually run this anywhere and without internet access easily.
Besides loading the frontend resources, is there anything else that wouldn't work? Seems like a simple idea, so as long as the assets could be loaded, you'd be able to "load" the "apps", wouldn't you?
bronxpockfabz 23 hours ago [-]
Sure, but what's the point then? Seems like .html with extra steps, not to mention that the URL itself won't work.
Now for online, the data is in the URL already, publicly available (unless shared privately), and the "loader" is still served from the server, so you have to trust the server not to exfiltrate the data.
embedding-shape 23 hours ago [-]
> Sure, but what's the point then? Seems like .html with extra steps, not to mention that the URL itself won't work.
Literally says in the submission title and the website itself: An entire website encoded in a URL.
And yes, the domain part of the URL might not work, but whatever URL you use locally would work just as well if you switch the domain, unless I'm missing something.
> Now for online, the data is in the URL already, publicly available (unless shared privately), and the "loader" is still served from the server, so you have to trust the server not to exfiltrate the data.
Yes, the data is in the URL, seems to be the single point of this entire project. I don't seem to find any "server" doing anything of worth here, all the meat really sits in the client-side code, which you can serve however you like, might even work through file://, haven't tried it myself.
bronxpockfabz 23 hours ago [-]
> An entire website encoded in a URL
It is very much not; open the network tab on any of the examples and behold. Using https://nowhr.xyz/s#yzXyzs8PcDbxyQ_0KbYMzzRNytKNyE0JDM0x8zT2... (found in the HN comments) as an example:
Not a single one of those requests contains the string "This is a message site. I guess. Just checking." - or did I miss something? All it seems to load is the "website loader", which is the part that decodes the URL (locally) and displays "the website" to you.
So assuming you have local access to the loader and you have the parts from the URL, you'd be able to load it.
I'm not sure if y'all are consciously misreading how this is supposed to work, or if I'm misunderstanding what y'all are complaining about. It's not "A public internet website can be loaded if you're not connected to the public internet", it's "websites loaded in this way can be loaded this way as long as you have the loader".
choward 22 hours ago [-]
If it was a simple web site without any third party dependencies, you could bookmark it.
brazzy 23 hours ago [-]
The technology is interesting and has some merit, but the way it's communicated is clearly style (and grand, vague claims) over substance.
5t34k 10 hours ago [-]
[dead]
pdyc 22 hours ago [-]
I did a Show HN with a similar idea (it got a whopping 1 point and was flagged as spam, which the mods later removed): you paste your HTML and it encodes it into a URL, which you can share without any server involvement. I even added a URL shortener, because while technically feasible, the encoded URL becomes long and a QR code no longer works reliably. I also added annotations, so you can add your comments and pass it to colleagues.
If I understand correctly, when a nowhere URL is pasted in a browser, what happens is:
1. the browser downloads generic JS libraries from the main site
2. these libraries then decode the fragment part, and transform it into the UI
If that's correct, someone still has to host or otherwise distribute the libraries - hence why you need the app to use it while offline (it ships the libraries).
This is not criticism, I'm just trying to get my head around how it works.
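That two-step mental model can be sketched in plain JS. Note the encoding details here (plain base64url, no compression) are assumptions for illustration; the real loader presumably also decompresses.

```javascript
// Step 2 above, sketched: read the fragment, decode it, get the site's HTML.
// Plain base64url here is an assumption; nowhr.xyz likely compresses too.

// Encode an HTML string into a fragment-safe base64url payload.
function encodeSite(html) {
  return btoa(String.fromCharCode(...new TextEncoder().encode(html)))
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// Decode the payload back into HTML (what the loader does client-side).
function decodeSite(fragment) {
  let b64 = fragment.replace(/-/g, "+").replace(/_/g, "/");
  b64 += "=".repeat((4 - (b64.length % 4)) % 4); // restore stripped padding
  const bytes = Uint8Array.from(atob(b64), c => c.charCodeAt(0));
  return new TextDecoder().decode(bytes);
}

const url = "https://example.test/s#" + encodeSite("<h1>hello</h1>");
const html = decodeSite(new URL(url).hash.slice(1));
console.log(html); // "<h1>hello</h1>"
```

The loader JS itself still has to come from somewhere, which is exactly the point being made above.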
rrvsh 23 hours ago [-]
I think it still fulfills the brief; the website you are accessing is still hosted "nowhere". Very cool concept - I just read about fragments in the MDN docs a couple of months ago.
nchie 23 hours ago [-]
But dependencies are part of a website? It literally says "Still here when the internet isn't." - but I can't go on there without an internet connection?
jdiff 23 hours ago [-]
Service Workers can cough up this stuff even without a connection, provided you already visited the site once before. This is how sites like Twitter still load their bones even without a connection.
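A rough sketch of that mechanism, with the handler factored out as a pure function so the logic is visible (the `cache` and `network` arguments here are injected stand-ins, not the real Cache/fetch APIs):

```javascript
// Network-first with cache fallback: roughly what lets a previously visited
// site still load its bones offline.
async function respond(request, cache, network) {
  try {
    return await network(request);             // online: fresh response
  } catch {
    return (await cache.get(request)) ?? null; // offline: cached copy, if any
  }
}

// In a real service worker this would be wired up roughly as:
//   self.addEventListener("fetch", e =>
//     e.respondWith(respond(e.request, cacheWrapper, fetch)));
```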
embedding-shape 23 hours ago [-]
> Very cool concept, just read about fragments on the MDN docs a couple month ago
Crazy to hear of someone only now reading about something that's been around since the 90s and is probably one of the first parts you touch when doing web development, but I guess you're just one of today's lucky 10,000 :) (https://xkcd.com/1053/)
zane__chen 23 hours ago [-]
I don't see any demo.
But would this mean encoding the entire dist folder after build step?
nmoadev 22 hours ago [-]
Interesting thought to explore but overblown claims.
For the privacy claims to hold, the fundamental precondition is that you trust and use the nowhere app / domain. The source is open, so let's grant that you, individually, can be satisfied on that front.
Now, the idea that entire apps can be shared via a link in a Signal chat or a QR code on a flier is a fascinating bit of compression and potential for archiving.
Imagine games shared on paper QR codes at a meetup.
Oh, but here's the rub: do you trust the arbitrary code you just scanned off of a QR code? TLS has become a proxy for trusted authorship. “Well, if it's really coming from my bank then it's probably safe.”
terrycody 7 hours ago [-]
So is it possible for an ISP or a country to block this entire domain, and then you can't use the URL, right?
This resembles some serverless pastebins. Data is serialized into the fragment part, and client-side JS deserializes it. The only practical difference is that this app renders it as HTML while those render it as text.
fainpul 23 hours ago [-]
Similar to mdview.io (markdown only, not offline) and a suggestion I made a while back:
This also is quite handy for inlining SVGs in CSS, although I believe you have to mark the encoding as utf-8.
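For reference, the SVG-in-CSS trick mentioned works roughly like this (a minimal sketch; the selector, icon markup, and escaping set are made up for illustration):

```javascript
// Build a data: URI for inlining an SVG in CSS, declaring the charset as the
// comment above notes. Minimally escapes the characters that break CSS url().
function svgToCssUrl(svg) {
  const encoded = svg
    .replace(/"/g, "'")   // single quotes survive inside url("...")
    .replace(/#/g, "%23") // '#' would otherwise start a fragment
    .replace(/</g, "%3C")
    .replace(/>/g, "%3E");
  return `url("data:image/svg+xml;charset=utf-8,${encoded}")`;
}

const icon = '<svg xmlns="http://www.w3.org/2000/svg" width="8" height="8"><circle cx="4" cy="4" r="4" fill="#333"/></svg>';
console.log(`.icon { background-image: ${svgToCssUrl(icon)}; }`);
```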
endofreach 17 hours ago [-]
"There is a part of a URL called the fragment, the text after the #. By design, fragments are never sent to servers. They exist only in the browser, only on the device, only at the moment of access.
Nowhere puts everything there. An entire website compressed and encoded in a string of characters. There is no server holding it. There is no account it belongs to. There is no company you need permission from. The link is the site. Wherever the link travels, the site travels with it."
I am so tired of this type of phrasing... OMG, this is gonna change the world!!!
...
And we are just starting out... oh boy...
Anyone up for working on a new internet, designed to keep stuff like this, and the people who produce it, out?
A URL fragment is the part after #. The HTTP specification prohibits browsers from sending fragments to servers. The server that delivers the page never receives the content, never knows which site you are viewing, and has no way to find out. No content is collected, stored, or logged. The privacy is structural.
A site that was never put on a server can never be taken off one. There is no account to suspend, no host to pressure, no platform that can decide your content should not exist. Each copy of the link is a complete copy of the site data.
Site creators can encrypt the URL itself with a password. Even possessing the link reveals nothing about what is inside.
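The structural part of that claim is easy to verify with the standard URL API: the fragment parses client-side, but it is not part of the request target a browser sends (per RFC 3986, the fragment is handled by the client).

```javascript
// The fragment (everything after #) never leaves the device: it is not part
// of the request target the browser sends to the server.
const link = new URL("https://nowhr.xyz/s#SGVsbG8sIHdvcmxk");

// What the server actually sees in the request line:
const requestTarget = link.pathname + link.search;
console.log(requestTarget); // "/s"

// What stays in the browser, readable by client-side JS as location.hash:
console.log(link.hash); // "#SGVsbG8sIHdvcmxk"
```

Whether the JS that the server delivers then leaks the fragment is a separate question, as other commenters point out below.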
That's great. Be sure to make these sites into a webring, so that each one can link to the next and thus to all the others.
embedding-shape 23 hours ago [-]
> A site that was never put on a server can never be taken off one. There is no account to suspend, no host to pressure, no platform that can decide your content should not exist. Each copy of the link is a complete copy of the site data.
Unless site A is encoded in a format that only one other site B on the internet can decode and "serve" (even if it's all client-side), in which case whoever wants to block site A can just block site B as a whole.
textninja 20 hours ago [-]
> The server that delivers the page never receives the content, never knows which site you are viewing, and has no way to find out.
Let me tell you about a thing called JavaScript.
> A site that was never put on a server can never be taken off one.
If you post a link on HN and the content is embedded in the link itself then HN is the de facto server.
jdiff 23 hours ago [-]
If each copy of the link is a complete copy of the site data, how could a forum work?
oersted 23 hours ago [-]
> For orders, messages, and real-time coordination, Nowhere uses Nostr relays as communication infrastructure. Relays see only encrypted data they cannot read, arriving from ephemeral keys they cannot trace, sent from a nowhere site they cannot identify.
brazzy 23 hours ago [-]
> The server that delivers the page never receives the content, never knows which site you are viewing, and has no way to find out.
Technically true, practically a lie, because that server delivers the JavaScript which decodes and presents the content, and that JavaScript absolutely has the ability to inspect, modify/censor, and leak the content (along with fingerprints of the browser).
> no host to pressure, no platform that can decide your content should not exist.
Except for https://nowhr.xyz, which becomes a single point of failure for all of these sites...
wateralien 23 hours ago [-]
You download the app in case that site goes down.
ivanjermakov 23 hours ago [-]
LLM agent discovered plain text and base64 encoding?
immanuwell 19 hours ago [-]
By the way, there is also itty.bitty.site with the same idea
ajsnigrutin 23 hours ago [-]
What's the point?
You still have to share the link somewhere, why not just share a block of text (invitation, campaign, whatever) directly instead?
fsiefken 23 hours ago [-]
Yes! It's similar to people sharing a simple URL only within a QR code. I find it insulting and inconvenient - I can remember, or jot down and type in, a URL; I don't need a smartphone for that.
In theory you could put a small html/website in a dense QR code, that would be truly offline - it's a similar thing.
There is also the Pico-8 cartridge format, where a game is steganographically embedded in a PNG:
https://github.com/l0kod/PX8
> Private through physics. Not through policy.
Goodness, the LLM really convinced itself this was groundbreaking.
You could describe a .html file sitting on your computer with all of the same marketing bluster.
Someone has to send it to you all the same, so you might as well not rely on some random internet service to render it.
https://mourner.github.io/bullshit.js/
Edit: Apparently "Platforms" => "Bullshit" ;)
https://easyanalytica.com/tools/html-playground/
Also, the share SVG icons look very broken.
Yes, it's not communicated very clearly.
https://tinyurl.com/mrpas5dc
Posted to HN in 2023
https://news.ycombinator.com/item?id=37408150
So it's just like sending your site's link through email/WhatsApp or any other channel. I don't know what the real use case for this idea could be!
This works as a "URL" in both Chrome and Safari:
For example it will give you this: https://news.ycombinator.com/item?id=47888337#47888930#:~:te...
data:text/html,<pre onkeyup="(function(d,t){d[t]('iframe')[0].contentDocument.body.innerHTML = d[t]('pre')[0].textContent;})(document,'getElementsByTagName')" style="width:100%;height:48%;white-space:pre-wrap;overflow:auto;padding:2px" contenteditable></pre><iframe style="width:100%;height:48%">
https://github.com/kelseyhightower/nocode
https://github.com/5t34k/nowhere
There are also the Piet and Pikt esolanguages, where the visuals are the code: https://esolangs.org/wiki/Piet https://github.com/iamgio/pikt
I think it's just for fun :)