2024-01-31
Channels
- # aleph (24)
- # announcements (2)
- # aws (1)
- # babashka (2)
- # beginners (46)
- # calva (15)
- # chlorine-clover (1)
- # clojure-europe (27)
- # clojure-nl (3)
- # clojure-norway (13)
- # clojure-uk (7)
- # clojurescript (16)
- # datomic (29)
- # emacs (4)
- # fulcro (16)
- # hugsql (6)
- # hyperfiddle (65)
- # lsp (9)
- # malli (3)
- # off-topic (29)
- # pedestal (1)
- # releases (1)
- # shadow-cljs (52)
- # specter (5)
- # xtdb (1)
For better or worse, we've been using the OpenAI API for a while at work. It doesn't support polling, so we're forced to keep connections open until it responds, which causes certain workflows to fail (AWS Lambda, I'm looking at you 😛), so I've been trying to think of workarounds. 🧵
Are you aware of any SaaS offerings that "asynchronize" HTTP API requests? I'm not sure what the correct terminology is, HTTP API proxy? Is there such a thing? I can't get relevant web results 😕
As a user you have two operations:
1. make a request and get back a request id
2. use the request id to get a response payload, or a status: report-not-ready or report-not-found
Behind the scenes, endpoint 1 takes your input, sends it to a background worker and responds with an id for the request. The background worker logs the status to a db whenever it changes and the final response when it's ready.
Endpoint 2 just looks for the response in the db and if it's there it returns it, otherwise responds with an error code and the request status.
Something like that. I can't be the first one who has thought about this, it's too generic (simple even, I dare say!) but I can't find anything out there, so I'm wondering what I might be missing.
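(Roughly what those two endpoints could look like; just a sketch, with an in-memory atom standing in for the db, a future as the background worker, and ring-style response maps. submit!, poll and call-upstream are names I made up; random-uuid needs Clojure 1.11+.)
```clojure
;; request-id -> {:status ..., :response ...}; a real setup would use a db
(defonce requests (atom {}))

(defn submit!
  "Endpoint 1 behind the scenes: record the request, kick off a background
  worker for the slow upstream call, and return an id right away."
  [payload call-upstream]
  (let [id (str (random-uuid))]
    (swap! requests assoc id {:status :pending})
    (future
      (try
        (swap! requests assoc id {:status :done :response (call-upstream payload)})
        (catch Exception e
          (swap! requests assoc id {:status :failed :error (ex-message e)}))))
    id))

(defn poll
  "Endpoint 2: return the response if it's there, otherwise an error code and the status."
  [id]
  (case (:status (get @requests id))
    nil      {:status 404 :body {:status "report-not-found"}}
    :pending {:status 202 :body {:status "report-not-ready"}}
    :failed  {:status 500 :body {:status "failed"}}
    :done    {:status 200 :body (:response (get @requests id))}))
```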
CreateVm returns (immediately) a job, then you poll a separate endpoint with the job id to know about completion
Is CreateVM a GCP thing? Sorry, I'm only familiar with AWS. You're right that this is common; I'm asking specifically for a managed service that will take a synchronous API and turn it into an asynchronous one, so that your apps don't have to think about managing uptime. i.e. rather than use service A, which doesn't support this, you call service B, which does, and let that call A.
I see! Yeah, that's what I'm looking for, but specifically as a proxy to a synchronous HTTP API. Something like curl as a service or something 😛
Basic question, but: have you tried turning streaming on in the API? You start getting a response pretty quickly that way.
Basic questions are fine, I'm a basic user myself!
> have you tried turning streaming on in the API
For OpenAI specifically? I'm not actually the person using the API, just a facilitator, so I'm not terribly familiar with this code, but from a quick glance it doesn't look like it's used :thinking_face: I think they need to get the final response and pass it on to another component, so the process has to wait until the end. Unless switching to streaming mode reduces total processing time too, I don't think it will solve our problem, but I'll take a closer look and ask around just in case, thanks!
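(For reference, turning on streaming against the chat completions endpoint looks roughly like this; sketch only, using clj-http and cheshire, and the model/messages are made up. The body comes back as server-sent events, one `data: {...}` chunk per line.)
```clojure
(require '[clj-http.client :as http]
         '[cheshire.core :as json]
         '[clojure.java.io :as io])

(defn stream-completion!
  "Calls the OpenAI chat completions endpoint with \"stream\": true and
  prints each SSE chunk as it arrives, instead of waiting for the full reply."
  [api-key]
  (let [resp (http/post "https://api.openai.com/v1/chat/completions"
                        {:headers {"Authorization" (str "Bearer " api-key)
                                   "Content-Type" "application/json"}
                         :body (json/generate-string
                                {:model "gpt-4o-mini"
                                 :stream true
                                 :messages [{:role "user" :content "Hello!"}]})
                         :as :stream})]
    (with-open [rdr (io/reader (:body resp))]
      (doseq [line (line-seq rdr)
              :when (and (.startsWith ^String line "data: ")
                         (not= line "data: [DONE]"))]
        (println (subs line 6))))))
```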
Not a SaaS, but as far as AWS goes you could transfer the long-running part to an ECS/Fargate task that monitors an SQS queue or gets invoked by a Step Function or something and then posts the results to a DB/queue/webhook/whatever.
Exactly what I was thinking! Fargate + some kind of events system (because the server needs to know what the status of the request is) seems like an obvious solution 🙂 I was just making sure I'm not reinventing the wheel before I proceed
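(The worker side of that could be roughly this, assuming the Cognitect aws-api (com.cognitect.aws/api plus the sqs service artifact); call-openai! and store-result! are just placeholders and the queue URL would come from config.)
```clojure
(require '[cognitect.aws.client.api :as aws])

(def sqs (aws/client {:api :sqs}))

(defn call-openai! [body] (str "response for " body))      ; placeholder for the slow call
(defn store-result! [result] (println "stored:" result))   ; placeholder for db/webhook write

(defn run-worker!
  "Long-polls the queue, does the slow work, persists the result, then
  deletes the message so it isn't redelivered."
  [queue-url]
  (loop []
    (let [{:keys [Messages]} (aws/invoke sqs {:op :ReceiveMessage
                                              :request {:QueueUrl queue-url
                                                        :WaitTimeSeconds 20}})]
      (doseq [{:keys [Body ReceiptHandle]} Messages]
        (store-result! (call-openai! Body))
        (aws/invoke sqs {:op :DeleteMessage
                         :request {:QueueUrl queue-url
                                   :ReceiptHandle ReceiptHandle}}))
      (recur))))
```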
> I think they need to get the final response and pass it on to another component, so the process has to wait until the end. Unless switching to streaming mode reduces total processing time too, I don't think it will solve our problem
That makes sense! I was thinking that if there were a direct connection from the OpenAI API to the service that was failing, turning on streaming might act as a keep-alive / heartbeat sort of signal, but it sounds like the components in between would likely prevent that. Maybe just an automatic heartbeat message from the component that communicates directly with AWS Lambda? 🤷
I’m wondering how polling would solve your problems if you need to get the whole response anyway. Wouldn’t your lambda need to poll until it receives all of it? Or would it just finish and then another one would try polling again?
> Or would it just finish and then another one would try polling again?
Yes, that's the idea!
What do you guys use for pagination? Do you follow some spec? Have some inspiration? Know of a good example? Edit: focus on JSON with HTTP APIs.
Some time back I wrote https://github.com/ivarref/clj-paginate, a Clojure (JVM only) implementation of https://relay.dev/graphql/connections.htm with a vector or map as the backing data. Maybe that is of interest? It has worked well at my workplace.
@U2J4FRT2T Depending on context, would clojure.core/iteration help?
@U2FRKM4TW thanks for the advice. Added an edit. Focus on JSON/HTTP APIs.
iteration is designed specifically for situations like paginated API calls.
(for consuming them)
(clj-paginate is for serving/producing)
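(For the consuming side, a minimal sketch of iteration against a hypothetical cursor-paginated JSON endpoint; :next-cursor and :items are whatever your API actually returns, and this assumes Clojure 1.11+ and clj-http.)
```clojure
(require '[clj-http.client :as http])

(defn fetch-page
  "Fetches one page of the (hypothetical) API; cursor is nil for the first page."
  [cursor]
  (:body (http/get "https://api.example.com/things"
                   {:as :json
                    :query-params (cond-> {} cursor (assoc :cursor cursor))})))

(def all-items
  ;; :kf pulls the next cursor out of each page, :vf picks out the items;
  ;; iteration stops fetching when :kf returns nil
  (into [] cat (iteration fetch-page :kf :next-cursor :vf :items)))
```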
@U01Q1DH4682 I'm more concerned with "how to design good APIs that developers will be happy to consume". @UGJE0MM0W the GQL pagination spec is currently my main reference.
Right.. So if you tell your consumers that an endpoint gives the result in the form of the gql pagination spec (and you've implemented it correctly :)), they can point any gql pagination client at that endpoint and consume it. That should make them reasonably happy/content, no? (Edit: I did not correctly implement the spec myself initially: https://github.com/ivarref/clj-paginate/tree/main?tab=readme-ov-file#2022-09-23-0253 😧)
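(For anyone skimming, the spec boils down to responses shaped roughly like this; the node contents and cursors here are made up.)
```clojure
{:edges    [{:node {:id 1 :name "first thing"}  :cursor "b2Zmc2V0OjA="}
            {:node {:id 2 :name "second thing"} :cursor "b2Zmc2V0OjE="}]
 :pageInfo {:hasNextPage     true
            :hasPreviousPage false
            :startCursor     "b2Zmc2V0OjA="
            :endCursor       "b2Zmc2V0OjE="}}
```
The client then asks for the next page by passing the last cursor as the `after` argument.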
What is your data source? I suppose not a vector, but perhaps Datomic?
I recently used the Shopify REST API, which uses the HTTP Link header standard for pagination. It looks somewhat arcane, but the handy client middleware https://github.com/dakrone/clj-http/blob/3.x/src/clj_http/links.clj will parse it, so I had a flawless experience consuming their "next" links
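(Something like this; the links middleware puts the parsed header under :links on the response, so following "next" is just a loop. The endpoint is made up, and I believe wrap-links is in clj-http's default middleware stack.)
```clojure
(require '[clj-http.client :as http])

(defn fetch-all
  "Follows :next links (parsed from the Link header by clj-http's links
  middleware) until the server stops sending one."
  [url]
  (loop [url url, acc []]
    (let [resp (http/get url {:as :json})
          next (get-in resp [:links :next :href])
          acc  (into acc (:body resp))]
      (if next
        (recur next acc)
        acc))))
```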