would it be possible to have a streaming service that is totally decentralized relying only on peer to peer bandwidth sharing?
The problem with streaming is that your data has to fit onto a single machine. Even systems like Ethereum keep their data small enough to fit in one box (the chain is still intended to live on a single machine's HDD, and they also rely on trusted chain snapshots). The more data you want to stream, the less likely it fits on one box. To mitigate this you could rely on something like IPFS to serve the actual streams and only put the digests on your chain, which would fit on a single machine. But you wanted a streaming service, not a file delivery system, which means you need to chunk those files and calculate hashes on the chunks (this will also increase download speeds). So now you'd be in the chunking business: how do you find a chunk size that keeps the paid chain small but also keeps downloads fast for the people fetching the stream? (Assuming you're sending your stream to more than one person.)
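The chunk-and-hash idea above can be sketched in a few lines (a hypothetical illustration, not tied to any particular system; the 256 KiB chunk size and SHA-256 are assumptions a real design would tune):

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # assumed chunk size; real systems balance chain size vs. download parallelism

def chunk_digests(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Split data into fixed-size chunks and return a SHA-256 digest per chunk.

    Only these small digests would go on-chain; the chunks themselves could be
    served by something like IPFS, and downloaders could fetch chunks from
    several peers in parallel, verifying each one against its digest.
    """
    return [
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

digests = chunk_digests(b"x" * (CHUNK_SIZE * 2 + 1))
print(len(digests))  # -> 3: two full chunks plus one trailing byte
```

The chunk-size tension mentioned above is visible here: smaller chunks mean more digests to pay for on-chain, but finer-grained parallel downloads and earlier verification.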
If it's a video stream, you can also do quality conversions: once you have many chunks, you can downscale them and interleave the video streams to the user. If these streams are financial data, then it's probably something different.
only data about the nodes that provided bandwidth & money allocated to them for their service
Twitch lags by about a minute. I'm not sure what kind of "won't be persisted" you're talking about :thinking_face: You could delete your files "soon", but even while data is being sent it's still stored in wires and routers; even if you think of it as "moving", it's saved somewhere. So you'd probably want 1-minute chunks that you load from storage.
one of the (many) reasons I get annoyed with cryptocurrency projects that try to do the "what if X, but decentralized and with a token" thing is that they suck all the oxygen away from efforts to achieve similar goals without the waste of permissionless blockchains. One such project, which aims to make data content-addressable in a way that would support a distributed approach to data access without cryptocurrencies, is https://named-data.net/ (see https://en.wikipedia.org/wiki/Named_data_networking#History). It's had substantially more thought put into it by people actually involved in the design and implementation of the modern networking stack than the crypto-token hype projects that claim to do similar things.
A token is just a value expression mechanism that predates cryptocurrencies and the internet and.. well, a lot of things. Using cryptography and distributed networking to exchange tokens does not necessarily make tokens a bad idea and using tokens doesn't make an idea bad.
I think Filecoin is just an incentive layer built on top of https://en.wikipedia.org/wiki/InterPlanetary_File_System. NDN and IPFS seem similar.
But from a quick glance, I can't see what NDN does to encourage the named content to remain hosted. I guess it's out of scope, and that's the particular aspect of such an architecture that Filecoin aims to solve.
as a matter of fact, what you're describing is each packet being treated somewhat like an NFT
blockchain systems, on the other hand, are a protection against network partitioning, emulating 'ownership' & 'binding contracts' by building a totally ordered immutable log
i don't see how anyone would be incentivized to run a service such as providing bandwidth, if they're not compensated for it
one vision is that of a multitude of data producing/consuming nodes that meet in a data market
If "missing the crucial business aspect" means decoupling the questions of "how to do it" and "who will pay for it" in considerations of network + technical architecture, then I'm all for it. It's fine if the technology doesn't answer that question, just like it's fine that I don't have to worry about who's paying for users' hardware when I release a software library. Experience shows that attaching a cryptocurrency to a distributed network is a great way to flood it with scammers and create a financial bubble at record speed. I'd rather not hitch my wagon to that star.
A regular old market can and will continue to meet the storage needs of most people for some time yet. Cryptocurrencies are having a pretty difficult time showing that the "decentralization" they provide does anything other than create new intermediaries, wasting loads of computational resources to provide a network that is far less stable and secure than conventional institutions.
I think the "how to do it" and "how to pay for it" are orthogonal issues and it's not a bad thing that there is effort in both places. Crypto is definitely rife with scammers but that doesn't change the fact that at its core, the idea is simply the combination of distributed networking, cryptography, and game theory.
This is the last thing I'll say on this: if we're going to make the case that the problem is in the implementation rather than the core of the idea, I wonder exactly how many more people need to get scammed as cryptocurrency hackers "test in production" before someone finally lands on an implementation that's not a complete mess. https://web3isgoinggreat.com/
I'm just making the case that the idea is an abstract thing and its outcome is difficult to predict. Happy to continue the conversation anytime you are @UFTRLDZEW 🙂
this whole tech is in its infancy & we have not even begun to imagine the fruits it will bear
look at the issue of identity for instance. right now user identity is local & walled behind a trusted system with control over it being delegated to the user by the trusted entity.
soon people will realize the power of public-key cryptography and a global notion of identity that does not depend on any particular system.
it will turn the "users" at the mercy of trusted systems into "actors" with full control over their identity, data, privacy, ...
not to mention decoupling the notion of 'legal binding' from geopolitical jurisdictions.
one could make the same argument regarding "scammers" about indie game developers too.
with the rise of platforms like Steam & crowdfunders & ... a massive amount of scammy garbage was produced
but no one in their right mind would deny that the value created dwarfs the garbage output
in any case, i don't see people pointing fingers at the travesty of current financial systems & judicial frameworks, economic crises, the underdeveloped world sinking deeper into debt, upcoming wars & ...
a million times the whole market cap of crypto is wasted by institutional investors & financial manipulators
a lot of tech people fail to appreciate crypto because of limited knowledge about political economy
'decentralized' identity is an instructive example: the people working on it appear to completely lack a consistent concept of what "privacy" and "control" even mean, as argued convincingly by Molly White:
> People are already talking about capturing enormously sensitive information in digital form, issuing attestations about other individuals either with or without their consent, and, in some cases, recording all of these things to immutable blockchains where they would be stored indefinitely
https://blog.mollywhite.net/is-acceptably-non-dystopian-self-sovereign-identity-even-possible/
I personally do not want sensitive information about individuals recorded on an immutable, tamper-proof log with no one who can be appealed to when things (inevitably) go wrong. There's a lot that's wrong with existing institutions, but I'd rather not create new ones with their flaws frozen in time by code and immutable ledgers.
Decentralized ID can become more interesting when zero knowledge proofs are applied.
Sorry, the state is not going away, regardless of where the data is stored. We've had machines that perform contracts for quite a while: https://youtu.be/JPkgJwJHYSc?list=PLUl4u3cNGP63UUkfL0onkxF6MYgVa04Fn&t=2026 In every contract there are not 2 parties involved, there are (at least) 3, the third being the state.
You can have consent centric networking and an identity model without crypto :man-shrugging: But it might have baggage attached 🙂
While the last thing I want is a government-provided and controlled internet, I agree regarding the incentive structure. I just fall on the side that sees the internet as fundamentally broken. Money may be fake, but all abstractions are; they can still be useful, especially if we don't break them down into nonsense
I haven't watched that video yet but intend to. The state definitely is not going away, but it will continue to play catch-up with technological innovation. I like the https://www.usv.com/writing/2016/08/fat-protocols/ idea for the long term. Things just tend towards openness. The ability to produce and publish is reaching more people over time. I think it's natural that along with our ability to share ideas, we will be able to make value expressions about those ideas, and the entire medium by which we do so won't necessarily be fully controlled in a top-down manner.
I just had one of those "why didn't I google this before" moments... how to tell
git rebase to always resolve merge conflicts in a fixed direction: https://demisx.github.io/git/rebase/2015/07/02/git-rebase-keep-my-branch-changes.html
(for my use case it's the opposite strategy than the article's, i.e.
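The fixed-direction trick can be sketched as a tiny repro (assumes git >= 2.28 for `init -b`; names and file contents are made up). The gotcha the article covers: during a rebase, `ours`/`theirs` are swapped relative to a merge:

```shell
#!/bin/sh
# During a rebase, -Xtheirs keeps the branch being rebased and
# -Xours keeps the upstream you're rebasing onto -- the opposite
# of how the same options behave in a plain merge.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -qb main repo && cd repo
git config user.email demo@example.com
git config user.name demo

echo base > file.txt
git add file.txt && git commit -qm base

git checkout -qb feature
echo feature-version > file.txt
git commit -qam "feature edit"

git checkout -q main
echo main-version > file.txt
git commit -qam "main edit"

# Rebase feature onto main, keeping feature's side of every conflict:
git checkout -q feature
git rebase -Xtheirs main >/dev/null
cat file.txt   # feature's version survives the conflict
```

Swapping in `-Xours` would instead resolve every conflict in favor of `main` (the article's direction).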
I've found much value in taking many more much smaller steps. So I'd like to recommend GeePaw Hill's podcast about just this! https://www.geepawhill.org/tag/podcast/
Are you taking small steps when you're coding? Or large steps? Any disadvantages with small steps?
For me, most of the time, I find myself making steady progress with tiny baby steps in design and coding. I usually write down a todo list for my coding tasks to force myself to think deliberately about what the optimal steps are. However, rarely, when I'm in "flow" or too tired to be deliberate, I'll just code. The outcome is uneven: sometimes exceptionally good, but usually just lousy.
It depends on the stage of the project. In the beginning you don't care that you have a lot going on. Also, if it's a large refactoring, you simply can't move in small steps.
The last one is https://www.geepawhill.org/2022/06/29/ten-i-statements-about-change/ which I don't find relevant to coding steps. That's why I asked.
Episode 133 is a good start: MMMSS – A Closer Look at Steps https://open.spotify.com/episode/7CyXCbpBzlt3TaXbpy0Rqp?si=QyPOyfo6TJiWRwm0WbFXwQ&utm_source=copy-link Or episode 122: Path-Focused Design https://open.spotify.com/episode/6NATIiuwBAG4ek8YjwMv4Y?si=QOxqQFViQr-1Gm_zCvMqAQ&utm_source=copy-link
After listening to episode 130, I now know that his concept of "steps" has the special meaning of moving software from one working (steady) state to another working (steady) state. It's not the usual steps of task execution.
What are the reasons to coerce an http response body to a byte stream vs a byte array vs text/string, as is an option with http-kit here? I imagine that text means you have to read the entire thing into memory, which might be problematic (though I would imagine controlling the payload is mostly a networking task?). https://http-kit.github.io/client.html#coercion
Byte stream
• doesn't require reading the full response into memory
• can partially read or completely ignore contents
• allows for techniques like long polling
• allows for backpressure
Byte array
• can ignore the mechanics of reading the response and just deal with the full response once it's ready
String
• assumes text (not all responses are text)
• potentially assumes a particular string encoding
thanks for the explanation. I'm not sure which will matter in my case yet, so I'll probably start with string.
byte streams also can make bytes available sooner (when paired with a streaming decoding of whatever you're dealing with)
total throughput does not necessarily change (and sometimes is worse!) but sometimes that little bit of latency matters
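For reference, the coercion options discussed above look roughly like this with http-kit's client (a sketch; the `:as` values come from http-kit's coercion docs, the URLs are placeholders):

```clojure
(require '[org.httpkit.client :as http])

;; :as :stream -- body is an InputStream; read it incrementally (or close
;; it without reading to ignore the contents), enabling backpressure and
;; streaming decoding.
(let [{:keys [body]} @(http/get "https://example.com/big.json" {:as :stream})]
  (with-open [in body]
    ;; consume the stream chunk by chunk here
    ))

;; :as :byte-array -- whole body buffered for you, no text assumptions.
@(http/get "https://example.com/image.png" {:as :byte-array})

;; :as :text -- whole body decoded to a String (assumes the response is
;; text and that the encoding is detected correctly).
@(http/get "https://example.com/page.html" {:as :text})
```

The `:stream` variant is what pairs with the "bytes available sooner" point: a streaming JSON or CSV parser can start work before the response finishes arriving.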