#holy-lambda
2022-12-16
whilo23:12:49

Hey everyone! I have a beginner's question. Is there a way to run a singleton service for AWS lambdas? I am thinking about hosting the Datahike transactor for a group of functions that can run their queries locally inside the lambda, but need to coordinate their transactions with the transactor for strong consistency.

steveb8n00:12:14

Doesn't really fit lambda. Single server would be ec2 or ecs

whilo02:12:45

I understand your point, but I think this is not really true. Most operations typically just query the database, and then it is a very good fit for lambda functions, because they provide horizontal read scaling, a model that is very compatible with Datomic/Datahike's decoupled, scalable readers. The point is nonetheless that sometimes you will need to update the database and transact into it, and you might want to do this from your lambdas. In this case I guess it would be ideal to have a service running that would be reachable from the lambdas and well integrated. I could just deploy such a transactor into AWS, but I was wondering whether there was already a notion for such services in lambda land.

whilo02:12:50

For instance, if you just want to query a static Datahike database in lambdas, this would make perfect sense I think, because queries need zero coordination and can be executed in milliseconds with minimal reads from the index.

steveb8n03:12:07

agreed. reads can be in parallel so suitable for lambdas. I just can’t think of a way to maintain a singleton for keeping writes in serial. maybe ec2 for writes only? or build a disk/storage format which can reconcile writes, i.e. doesn’t need a singleton writer

whilo05:12:10

Ok, cool. Thanks for validating, I will think about it.

viesti19:12:43

@U1C36HC6N Lambda has concurrency configuration, so you could have a transactor lambda with reserved concurrency of 1 and then reader lambdas with unlimited concurrency:
> Reserved concurrency – Reserved concurrency guarantees the maximum number of concurrent instances for the function. When a function has reserved concurrency, no other function can use that concurrency. There is no charge for configuring reserved concurrency for a function.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
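
For reference, reserved concurrency can also be set programmatically with the AWS Java SDK v2; a minimal sketch from Clojure, assuming a deployed function named "transactor" (the name is a placeholder) and software.amazon.awssdk/lambda on the classpath:

;; Pin the transactor lambda to at most one concurrent instance.
(import '(software.amazon.awssdk.services.lambda LambdaClient)
        '(software.amazon.awssdk.services.lambda.model PutFunctionConcurrencyRequest))

(with-open [lambda (LambdaClient/create)]
  (.putFunctionConcurrency lambda
    (-> (PutFunctionConcurrencyRequest/builder)
        (.functionName "transactor") ; placeholder function name
        (.reservedConcurrentExecutions (Integer/valueOf 1))
        (.build))))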

viesti19:12:46

I have actually been thinking about this same thing for DataHike, but haven’t had the time/energy to look into it 😄

viesti20:12:38

I think the thing that put me off was that I couldn’t figure out whether DataHike could be backed by just S3 & DynamoDB (there was some old trial of persistence layers using S3 and DynamoDB that I looked into in the summer of last year, but those weren’t up to date with the latest DataHike at that time, if I recall correctly)

viesti20:12:39

IIRC, DataHike supports SQL databases as a backing store, so Aurora Serverless v1 could be an option, but that has a cold start in the order of ~30 seconds, which is annoying

viesti20:12:38

there are also Serverless PostgreSQL options with better cold start (and more recent PostgreSQL versions), like https://neon.tech/

viesti20:12:57

but to me, for a Datalog database, throwing all that querying capability of a SQL database out the window and using it only as a triple store feels wrong :D

viesti20:12:34

so it would be very interesting to see an S3 + DynamoDB backing for Datahike

viesti20:12:33

I haven’t wrapped my head around whether the querying lambdas would need to build some kind of query index in their memory, or whether this query index could reside in the memory of the transactor lambda. With provisioned concurrency, one could even keep such a transactor process always running, although that incurs a cost

viesti20:12:19

anyway, I suggest looking at concurrency control, specifically Reserved concurrency 🙂

whilo22:12:31

Thank you for the contextualization. Yes, using concurrency of one could work. The S3 backend needs to be ported, but we simplified our backend, you only need to implement this protocol https://github.com/replikativ/konserve/blob/main/src/konserve/filestore.clj#L95 and not all methods are needed https://github.com/replikativ/konserve/blob/main/doc/backend.org#backing-store-protocols. So porting the old backend into a reliable backend should be an effort of a few hours max, hopefully. I don't have experience with AWS unfortunately, but I would be down to do a pairing session and make it happen.
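
As a sketch of the raw building blocks such a port would wrap (not the konserve protocol itself; see the links above for the authoritative method list), the synchronous blob operations against S3 with the AWS Java SDK v2 look roughly like this, assuming an existing bucket:

(import '(software.amazon.awssdk.services.s3 S3Client)
        '(software.amazon.awssdk.services.s3.model PutObjectRequest GetObjectRequest DeleteObjectRequest)
        '(software.amazon.awssdk.core.sync RequestBody))

(def s3 (S3Client/create))

;; konserve keys would map to S3 object keys
(defn put-blob [bucket k ^bytes v]
  (.putObject s3
              (-> (PutObjectRequest/builder) (.bucket bucket) (.key k) (.build))
              (RequestBody/fromBytes v)))

(defn get-blob [bucket k]
  (.asByteArray
   (.getObjectAsBytes s3
                      (-> (GetObjectRequest/builder) (.bucket bucket) (.key k) (.build)))))

(defn delete-blob [bucket k]
  (.deleteObject s3
                 (-> (DeleteObjectRequest/builder) (.bucket bucket) (.key k) (.build))))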

whilo22:12:26

My take on caching would be to leave it to AWS and just wrap services with different service qualities and then pick the respective backend for your project. Datahike has native image support now, so lambdas should already be fairly fast to fire up. I would look into holy-lambda more to prepackage Datahike, but maybe it is just good enough to add it as a dependency to a project actually.

whilo22:12:59

@U06QSF3BK What would be a good test case application in your mind?

whilo22:12:20

I would probably opt for S3 first and speculate that many simple applications can cope with its latency. Maybe DynamoDB is then a good alternative for apps where you are willing to pay for lower latency. But I think some experimentation with simple setups would be a good start.

viesti07:12:37

> I would probably opt for S3 first and speculate that many simple applications can cope with its latency.
This sounds like a good rationale. I didn't actually have a good grounding to talk about DynamoDB, just that I have seen it come up with Datomic 😄 S3 has some interesting properties, like https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/.

👍 1
viesti07:12:46

Just a few weeks ago, the JVM AWS Lambda runtime (currently Java 11) got support for creating a VM-level snapshot of the Lambda process after deployment; on invoke, it loads this snapshot, which avoids the slow cold start of a JVM process (in my trial of a Reitit Ring app, the cold start went from ~7 seconds to 500 milliseconds). This is called Lambda SnapStart: https://aws.amazon.com/blogs/aws/new-accelerate-your-lambda-functions-with-lambda-snapstart/. So native-image support isn't strictly needed for a fast cold start, although I think it is good to keep the code so that it is supported. I'm not familiar with Datahike, but doesn't native-image prevent the use of clojure.core/eval, where one could evaluate code to use in a query, for example, for explorative purposes? Not sure if this would be a use case though; maybe one does explorative queries in some other way than running against a Lambda-based infra.

viesti07:12:51

> I would be down to do a pairing session and make it happen.
I'd be interested, just have to get better at my time management :D

viesti07:12:42

> What would be a good test case application in your mind?
I don't actually have experience with Datalog databases, but probably something that deals with aspects where Datalog is a very good choice? I guess for Lambda + S3, something that fleshes out both the compute and the persistence parts, but still is not pathological in that sense.

whilo09:12:58

Btw. we don't even use SQL as a triple store, just as a blob store. That is also why I think it is not a good default backend; it is very wasteful.

👍 1
whilo09:12:18

Snapstart sounds cool, but 500 ms is still quite some time.

viesti09:12:34

depends on the app, can be lower

viesti09:12:54

and that is just the cold start, when lambda has the process running, response times are lower

viesti09:12:26

especially after jvm hotspot kicks in

whilo09:12:52

I don't think we need eval, I think almost all Clojure applications using Datahike can be natively compiled.

steveb8n09:12:27

I'm building on snapstart and learning lots. Touch base if you want more info

whilo09:12:33

Ok, it is very cool to have options for sure. I just want to aim for the simplest setup that is resource-efficient, but maybe not the fastest.

viesti09:12:53

yup, I think I was going a bit too far, I haven't used Datalog databases, so was wondering how people do explorative queries, but I guess that happens at the repl, not in the deployed app 🙂

whilo09:12:59

Yes. I am a fan of JIT compilers and interactive setups, but native image compilation provides interesting options to scale out like this.

viesti09:12:00

but I guess the interesting thing is whether a transactor lambda with reserved concurrency would fit the singleton transactor requirement

👍 1
viesti09:12:50

for that S3 store I'd go for the plain AWS Java SDK v2

whilo09:12:35

Also, not sure how important this is, but I have not tried AOT compiling Datahike lately on the JVM.

whilo09:12:03

I am a bit worried about firing up the JVM to run a query to be honest.

whilo09:12:23

I think the latency impact will be seconds.

whilo09:12:38

And the compute spent massive compared to what the query execution costs.

viesti09:12:50

depends on activity

whilo09:12:54

(for simple queries)

viesti09:12:05

if there are more queries following, it'll be efficient in the long run

whilo09:12:39

I see, but often you have to consider the worst case latency for your app.

steveb8n09:12:46

Warm JVM is approx 2x faster than graal native

viesti09:12:49

you need AOT to be able to do native-image, so I guess AOT for Datahike works then? 🙂

whilo09:12:00

I guess so, too.

steveb8n09:12:21

Although that could improve given the size of the graal team

viesti09:12:22

the Snapstart freezes a Firecracker VM process, so what wakes up, is a warm JVM

viesti09:12:43

freezes during deploys, then on cold start, thaws it

whilo09:12:57

I think they should just turn the JVM into an OS at this point.

steveb8n09:12:58

Not entirely but pretty close

viesti09:12:10

heh, they would be tied to JVM then 🙂

viesti09:12:40

I'm expecting Snapstart to be available for other runtimes when they figure out how to offer stable random numbers, that don't get frozen

steveb8n09:12:05

However ssl connects are slow first time due to handshake. Snapstart can't fix networking

steveb8n09:12:36

All AWS APIs are ssl calls e.g. S3

viesti09:12:08

https://docs.aws.amazon.com/lambda/latest/dg/snapstart-uniqueness.html, there's a scanner that operates on bytecode level to check for patterns that one would want to avoid with Snapstart

steveb8n09:12:15

I'll test holy lambda vs snapstart soon

viesti09:12:35

(the findbugs successor)

viesti09:12:30

I'd think this snapstart would be great for say ML stuff, where you'd load a model in memory, then freeze the process, then thaw it upon first request and do inference

steveb8n09:12:42

It's excellent for CPU bound tasks. Still figuring out how to use it for ssl calls ie AWS APIs

whilo09:12:40

Deep learning requires a lot of GPU memory, just loading this will always be slow in current stacks.

whilo09:12:14

What is the best library to use to implement the S3 backend for Datahike?

whilo09:12:35

I could try to use the Java API directly.

👍 1
whilo09:12:00

Ideally I would like to have an API that can also be used asynchronously, e.g. with callbacks for the http requests.

whilo09:12:20

We have a dual async/sync stack.
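
For the async side of that stack, the v2 SDK's S3AsyncClient returns CompletableFutures, which can be bridged to callbacks; a minimal sketch (the success/error callback shape is just an illustration, not konserve's actual API):

(import '(software.amazon.awssdk.services.s3 S3AsyncClient)
        '(software.amazon.awssdk.services.s3.model GetObjectRequest)
        '(software.amazon.awssdk.core.async AsyncResponseTransformer)
        '(java.util.function BiConsumer))

(def s3-async (S3AsyncClient/create))

;; Fetch an object without blocking; invoke one of the two callbacks
;; when the HTTP request completes.
(defn get-blob-async [bucket k success error]
  (-> (.getObject s3-async
                  (-> (GetObjectRequest/builder) (.bucket bucket) (.key k) (.build))
                  (AsyncResponseTransformer/toBytes))
      (.whenComplete (reify BiConsumer
                       (accept [_ resp ex]
                         (if ex (error ex) (success (.asByteArray resp))))))))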

viesti09:12:36

I think I would also use the Java API, just make sure to use the v2 SDK 🙂

👍 1
steveb8n09:12:45

I used v2 java SDK with holy lambda. Worked well.

👍 1
steveb8n09:12:11

Also gives you a choice of http clients

viesti09:12:41

not sure if it's necessary here, but that java api has more support for things like multipart download and efficient syncing of large data, though we don't need that here. Generally I think it tracks new S3 features well, and has pluggable http client support (aws has their "common runtime", which is an optimized C library I think)

whilo09:12:18

Ok, cool. Thanks!

whilo09:12:44

If you feel like pairing over it, lmk.

👀 1
viesti09:12:14

those aws java sdk libs ship with native-image configurations, haven't looked into how much they matter, but they make an effort to have the libraries graalvm native-image compatible

whilo09:12:34

Cool! That is good.

viesti20:12:31

Apropos, when developing you can use for example Minio via a docker image; it has good support for the S3 API, so one can use the AWS Java SDK against Minio. https://min.io/docs/minio/container/index.html Continuing that thought, an S3 backend for Datahike would allow using any other object store that implements the S3 API, which I think is interesting.
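
Pointing the v2 SDK at a local Minio is mostly an endpoint override plus path-style addressing; a sketch, assuming Minio's documented default credentials (not real AWS keys):

(import '(java.net URI)
        '(software.amazon.awssdk.regions Region)
        '(software.amazon.awssdk.auth.credentials AwsBasicCredentials StaticCredentialsProvider)
        '(software.amazon.awssdk.services.s3 S3Client S3Configuration))

;; A local Minio, e.g. started via the docker image from the link above.
(def minio-client
  (-> (S3Client/builder)
      (.endpointOverride (URI. "http://localhost:9000"))
      (.region Region/US_EAST_1) ; required by the SDK, ignored by Minio
      (.credentialsProvider
       (StaticCredentialsProvider/create
        (AwsBasicCredentials/create "minioadmin" "minioadmin")))
      (.serviceConfiguration
       (-> (S3Configuration/builder)
           (.pathStyleAccessEnabled true) ; Minio serves buckets on paths, not subdomains
           (.build)))
      (.build)))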

viesti20:12:06

I think I'd be interested in a pairing session, I just don't know when, and I might be the slower one that benefits most :D

whilo23:03:32

I have implemented https://github.com/replikativ/konserve-s3 and https://github.com/replikativ/datahike-s3 taking inspiration from @U0510KXTU’s link above. I still need to figure out how to release the two projects with our deployment pipeline, but you can just use the github SHAs in deps.edn for now. @U06QSF3BK I would be down to pair and see how you would wire it up with lambda if you have some time. I don't have enough time myself right now to get into holy-lambda and the AWS stack unfortunately. Latency is as expected higher than with local storage, but if you do not write a lot, caches can stay warm and queries would perform with one roundtrip only (checking whether the DB has changed). Maybe latency is also much better if you access from AWS directly.

🎉 2
👏 2
whilo23:03:17

Lmk what you think 🙂

whilo23:03:46

Both repos are released now, so you can just depend on datahike-s3 and datahike and develop against that.

whilo23:03:59

I would be particularly interested in an MVP example with one lambda covering the transact function to S3 (which is guaranteed to only run once at a time) and some query in another lambda. If you can help me set up an example project for that I would be very happy.

viesti05:03:18

Awesome! :)

😊 2
whilo09:03:15

With this PR we now have close to optimal latency and automatic transaction batching under backpressure: https://github.com/replikativ/datahike/pull/618 (I still need to clean it up a bit for it to be merged). This should benefit the S3 backend the most, as it has high latency but can handle high throughput. Any suggestions on how I could build a demo project as simple as possible, ideally as a starter template for holy-lambda?

👍 2
viesti09:03:39

I have some ideas but haven't gotten them out of my head :D

viesti09:03:14

starter template is a neat idea. I think that if the template is serverless, then it should contain the single-writer setup (which I haven't yet gotten around to trying out), which is in significant part about creating the infra with some tool (I prefer terraform)

viesti09:03:18

also, although holy-lambda does great things to make life easier when using native-image, now that https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html is around, actually making the lambda could be simpler by just using the jvm11 runtime and making a class that implements the lambda entrypoint required by that runtime. so in a template, holy-lambda is kind of optional even. but if the demo were an app with a frontend and say a rest api, then the ring adapter in holy-lambda is really useful

viesti09:03:49

but I guess I first have to go and try that single-writer setup :)

viesti09:03:19

S3 backend also useful outside lambda I think, but should definitely be tried out in a Lambda :)

whilo09:03:06

yes, it is interesting for us, because we cannot easily offer a hosted service right now, but we might get contracts and support by offering datahike on lambda (that is just a guess by me), and it is a good starting point to then offer the writer as an EC2 instance

viesti09:03:37

ooh, interesting :)

whilo09:03:43

i am fine also with snapstart, honestly i am n00b on aws, my mind is mostly on distributed persistent data structures

viesti09:03:46

too crowded and not enough time for one mind to contain it all :)

whilo09:03:25

yeah, i can maybe figure it out between 30 and 40 o'clock 😉

whilo09:03:43

but i think i should learn a bit more about it now

whilo09:03:29

so i would be down to also take pointers if you are super busy, or to pair if you have some time at some point

viesti09:03:26

basically, aws's own jvm11 runtime (suggest jvm11 over jvm8) takes an uberjar with a class that implements the com.amazonaws.services.lambda.runtime.RequestStreamHandler interface, found in com.amazonaws/aws-lambda-java-core {:mvn/version "1.2.1"}, which you need to include in the uberjar

viesti09:03:55

implement that and you have an uberjar with a lambda-compatible entrypoint
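
A minimal sketch of such an entrypoint via gen-class (namespace and echo behavior are placeholders; the namespace has to be AOT-compiled into the uberjar):

(ns demo.handler
  (:require [clojure.java.io :as io])
  (:gen-class
   :name demo.Handler
   :implements [com.amazonaws.services.lambda.runtime.RequestStreamHandler]))

;; The runtime passes the raw event on `in` and expects the response
;; written to `out`; here we simply echo the event back.
(defn -handleRequest [_ in out _ctx]
  (io/copy in out))

With that in the jar, the lambda's handler is then set to the class name, demo.Handler.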

viesti09:03:26

make a lambda on aws console, upload the jar, then name the handler class, sounds a bit minimalistic, but that's the start 😄

viesti09:03:15

not sure if it helps, but my mind was focused on hacking on a bit of a different thing; there are some bits that you could steal from this, if it helps and terraform suits you :) https://github.com/viesti/clj-lambda-sideloader/tree/main/example

viesti09:03:08

been incrementing a slack reminder for a couple of weeks now, for a weekend to look into this datahike thing 😄

whilo09:03:07

hehe, no worries

whilo09:03:21

thanks, these steps sound doable

whilo09:03:22

what is the CRaC stuff?

whilo09:03:58

to get started with datahike it should be enough to copy this snippet and use it in a project with your S3 settings https://github.com/replikativ/datahike-s3#run-datahike-in-your-repl

👌 2
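
Along the lines of that README snippet, a hedged end-to-end sketch (bucket and region are placeholders; :schema-flexibility :read allows the schemaless demo data used later in this thread):

(require '[datahike.api :as d])

(def cfg {:store {:backend :s3
                  :bucket "my-datahike-bucket"
                  :region "us-west-1"}
          :schema-flexibility :read})

(d/create-database cfg)
(def conn (d/connect cfg))

(d/transact conn [{:name "Alice" :age 32}])

(d/q '[:find ?e ?n ?a
       :where [?e :name ?n] [?e :age ?a]]
     @conn)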
whilo09:03:13

if not then i need to fix and simplify it

viesti16:03:42

Hmm, tried it out a bit, d/delete-database seems to delete the whole bucket, which was a bit unexpected I think 🙂

viesti16:03:23

I’m thinking: could there be a prefix in the store configuration? the prefix would then be used to “name” a database, so you could remove all files under a prefix if needed

viesti16:03:19

I think S3 buckets are quite long-lasting things, re-creating a bucket with the same name (if you deleted it accidentally) can take some time, since AWS reserves also DNS name for a bucket

viesti16:03:32

sometimes at least

viesti16:03:00

but anyway, managed a hello world in lambda, yay! can put the code & terraform on github soon

viesti16:03:08

ah, the other thing: deleting a bucket is quite, hmm, a heavy operation; one would not want to grant that to a backend (though I failed to limit delete access in my test 😄)

whilo17:03:27

that is awesome!

whilo17:03:54

i agree about deleting the bucket and will look into prefixing keys

whilo18:03:20

how would you carve out the singleton lambda for transact? i think splitting the example into a transact lambda and two different query lambdas would be a good starting point for a template

viesti18:03:16

we could also have the same lambda source code, but say an environment variable that toggles the deployed instance to work as a transactor or a query node

viesti18:03:50

so deploy two lambda function instances, but configure them differently
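
A sketch of that toggle (the variable name and handler stubs are made up for illustration):

;; Hypothetical role switch; the real handlers would call d/transact
;; and d/q respectively.
(defn transact-handler [event] {:status "ok"})
(defn query-handler [event] {:status "ok" :result []})

(def writer?
  (= "writer" (System/getenv "DATAHIKE_ROLE")))

(defn handle [event]
  (if writer?
    (transact-handler event)  ; deployed with reserved concurrency 1
    (query-handler event)))   ; deployed with unlimited concurrency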

whilo18:03:44

that makes sense

whilo22:03:03

i fixed the bucket deletion issue with datahike-s3 0.1.8

viesti05:03:49

Nice! 🙂

viesti15:03:18

Took a look. I think I forgot to say that it might be neat to be able to specify a prefix, so you could have multiple databases in a single bucket, something like

{:store {:backend :s3
         :bucket "datahike-s3-instance"
         :prefix "my-db-1"
         :region "us-west-1"}}

👍 2
viesti17:03:30

tried out with separate writer and reader lambdas, but what happens is a bit interesting

0% bb run write '{"data": [{"name": "Alice", "age": 32}]}'
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"result":"ok","status":"ok"}
0% bb run read
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"result":[[3,"Alice",32]],"status":"ok"}
0% bb run write '{"data": [{"name": "Bob", "age": 42}]}'
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"result":"ok","status":"ok"}
0% bb run read
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"result":[[3,"Alice",32]],"status":"ok"}

viesti17:03:01

so what is going on here is that the reader lambda has a stale db reference, since it doesn’t show the write that the writer did

viesti17:03:40

so, one could have a concurrency=1 lambda, all reads & writes through the same process, lol

viesti17:03:59

but, how should one exactly use many reading processes with datahike?

viesti17:03:31

is there a way to tell datahike to “go refresh caches from the persistent store”?

whilo17:03:08

oh sorry, there is one boolean flag, streaming?, that needs to be changed

whilo17:03:49

this can be done by using a different config for the query endpoints

whilo18:03:15

inject this for the query connection before you use it:

(swap! (:wrapped-atom conn)
       (fn [db] (update db :writer #(assoc % :streaming? false))))

whilo18:03:39

that forces the connection to refetch from the underlying store every time you access it

whilo18:03:41

There will be a cleaner way to do this through the config.

viesti18:03:53

oh nice, I’ll try that 🙂

viesti18:03:20

How long has that option been around? I think I was looking for something like that maybe 1-2 years ago

whilo18:03:59

the PR was merged last week 😅

whilo18:03:28

currently it sets this when you have a remote transactor in the form of datahike-server

viesti18:03:56

well, and at that time, s3 backend wasn’t around, which was the thing that I was actually looking for 🙂

viesti18:03:49

well I’ll be damned, I guess it worked!

0% bb run write '{"data": [{"name": "Pedro jr", "age": 15}]}'
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"result":"ok","status":"ok"}
0% bb run read
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"result":[[6,"Pedro jr",15],[5,"Pablo",55],[4,"Bob",42],[3,"Alice",32]],"status":"ok"}
0% bb run write '{"data": [{"name": "Pedro", "age": 59}]}'
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"result":"ok","status":"ok"}
0% bb run read
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"result":[[6,"Pedro jr",15],[5,"Pablo",55],[4,"Bob",42],[3,"Alice",32],[7,"Pedro",59]],"status":"ok"}
both Pedros visible after each read

whilo18:03:13

❤️❤️❤️

whilo18:03:21

this is awesome!

whilo18:03:34

finally all the nights spent pay off 🙂

whilo18:03:24

what do you think?

whilo18:03:34

i have a PR for datahike that significantly reduces write latency btw., and could do auto batching in case we run the transact calls async in the lambda https://github.com/replikativ/datahike/pull/618

viesti18:03:59

walking the dog outside, -2 and fingers freezing, still a bit bewildered and glad that I could help. thinking that I would need to do some demo with a frontend, say a todo list :) then also thinking about a perf suite and a snapstart setup for eliminating cold starts

viesti18:03:39

to really have a serverless database, even datalog style, for Clojure, is just wicked :)

whilo18:03:25

fortunately freezing stopped here in vancouver already 🙂 my partner in montreal is still freezing though

viesti18:03:39

here in Finland winter came back, but probably only for a short while :)

whilo18:03:04

still have to visit finland unfortunately, never did it when i lived in germany

whilo18:03:24

winter in vancouver is not as cold, but very humid, so the cold sticks

whilo18:03:00

what you say makes sense. i would be super grateful for anything you can help with, as i am thinly stretched atm, also with my AI research (which hopefully i can integrate into Datahike as probabilistic inference)

👀 2
whilo18:03:37

i also need to do sales again as soon as there is something interesting to sell 🙂 atm. we do not make a lot of revenue with datahike and that slows its development

viesti18:03:54

I'm surprised in a positive way that Datahike can provide revenue :)

whilo18:03:43

yeah, we were somewhat lucky. we suck at sales

😄 2
viesti18:03:48

should make some noise somewhere about this lambda trial :)

whilo18:03:02

but i also needed to first get the distributed use case done before i wanted to go out and pitch it

whilo18:03:37

i would write a blog post as soon as we have a project template that people can use to build prototypes and small apps

whilo18:03:32

is it possible to fetch and process multiple requests asynchronously in one lambda?

viesti18:03:17

lambda is event by event, although there is async invoke to dispatch without waiting but then the event size is quite limited

whilo18:03:56

ok, that is our business case for a server then

whilo18:03:21

the server can process multiple requests in parallel and batch them, which gives you better scale on S3

whilo18:03:49

it is particularly helpful on S3 because of the high latency; it helps in general ofc.
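
For concreteness, a sketch of that batching idea with core.async: a single writer loop drains whatever queued up while the previous S3 round-trip was in flight and commits it as one transaction (assumes a conn from d/connect as in the README snippet; channel size and names are arbitrary):

(require '[clojure.core.async :as async :refer [chan go-loop <! >!!]]
         '[datahike.api :as d])

(def tx-queue (chan 1024))

(defn enqueue-tx! [tx-data]
  (>!! tx-queue tx-data))

(go-loop []
  (when-some [first-tx (<! tx-queue)]
    (let [batch (loop [acc [first-tx]]            ; drain without blocking
                  (if-some [more (async/poll! tx-queue)]
                    (recur (conj acc more))
                    acc))]
      ;; one storage round-trip for the whole batch
      (d/transact conn (vec (apply concat batch))))
    (recur)))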

whilo18:03:04

or do you think there is a better approach?

viesti18:03:56

i wonder about putting write requests into say sqs or another queue supported by lambda and then batching off the queue

whilo18:03:59

that would probably also do, i have no experience with this

whilo18:03:31

i think you want tx responses though, so the client needs to be notified only after the tx call completes

viesti19:03:18

> By default, Lambda polls up to 10 messages in your queue at once and sends that batch to your function. To avoid invoking the function with a small number of records, you can tell the event source to buffer records for up to 5 minutes by configuring a batch window. Before invoking the function, Lambda continues to poll messages from the SQS standard queue until the batch window expires, the invocation payload size quota is reached, or the configured maximum batch size is reached.
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
it's been some time since I did these things; I remember there being batch size and window size tunables with stuff like Kinesis Firehose, which allowed putting event processing into lambda when going through Kinesis

viesti19:03:59

these serverless things are kind of, well, you don't configure a single server, but a host of services :)

viesti19:03:15

when inside aws, lambda talks quite fast to nearby aws services, but yeah, some kind of benchmark would be neat

whilo19:03:14

it might nonetheless be reasonable to just run an EC2 instance for transact then; not sure how the prices of the services compare

viesti19:03:54

if you have enough traffic that the lambda is kept running all the time, then ec2 is cheaper, but it gets more complex, since nowadays you can even buy compute capacity for lambda upfront and benefit from discounts, the same way as with reserved instances for ec2 or databases

viesti19:03:58

for on-and-off traffic, this kind of setup with Lambda doing writes, with a fast enough cold start, is appealing

viesti19:03:23

so depends on the use case I think

whilo19:03:01

right, i will think about it

whilo19:03:13

one thing that is nice is if we can also host our setup in other environments

whilo19:03:28

S3 support is now fairly general in many environments and we have other store backends

whilo19:03:41

there are also other lambda runtimes that we probably can cover

whilo19:03:28

how would you like to proceed from here?

viesti19:03:03

will proceed to bed now :D, but with other lambda runtimes you probably mean say GCP Cloud Run, since the other JVM option in AWS would be a custom runtime. I tried GCP Cloud Run when it came out; it has probably advanced since. I think it even has an option to keep the compute that runs the process "warm" without throttling, as opposed to Lambda, where the process runs only while an event is processed and is otherwise frozen, so you can't do background processing, only execute while handling an event (though there is an upper processing limit in Cloud Run too, I think). I'd want to set up snapstart for the aws lambda next; not sure what comes after that. some kind of write and read benchmark would probably be neat; read-side scaling is interesting, but I would have to figure out a suitable benchmark scenario. does datahike have benchmarks available?

viesti19:03:34

but off to bed this side of the globe now :)

whilo20:03:11

have a good night! thanks for all the input 🙂

whilo20:03:19

we have https://github.com/replikativ/datahike/blob/main/doc/benchmarking.md, but this probably needs to be adjusted a bit

whilo20:03:43

i think it is also fine to just write up a synthetic benchmark of your own to get started

👌 2