I watched a few Datomic Ions videos and actually tried the solo topology. So far so good. However, I’m looking for a guide on a multi-developer story similar to a traditional CI/CD pipeline, so that we can use a hosted git repo (say GitHub) for code review and trigger deployment on selected branches. All the docs seem to show pushing & deploying code from a developer’s local dev machine. Am I missing anything? Thanks!
@rlhk.open I haven’t set this up yet, but I always imagined those push/deploy commands could easily be added to a CI build script. They are, after all, just Clojure functions.
I imagine the challenge would be around getting the appropriate credentials in your CI pipeline to execute the push and deploy commands
I’ve used environment variables for AWS_ACCESS_KEY and the like to do that. I had a special “Heroku” user in one of my projects that would perform the commands. It wasn’t Ions-based though. Maybe there is an included role that ships with Datomic for attaching to CI users.
yeah. it doesn’t sound more difficult than the way we build and deploy to AWS without ions
Indeed. I can imagine a small Clojure script containing a map of git branch to compute group, calling the properly configured push/deploy commands, all based on environment variables. I know CircleCI exposes a `GIT_BRANCH` var for doing this kind of scripting, for example.
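A branch-to-group deploy step like the one described above could be sketched roughly like this. Everything here is illustrative: the compute group names, the `:ion-dev` alias, and the `CIRCLE_BRANCH`/`CIRCLE_SHA1` variables (CircleCI’s actual built-ins) are assumptions about your setup, and a real script would parse the rev from the push output rather than reuse the git SHA.

```shell
#!/usr/bin/env bash
# Hypothetical CI deploy step. Assumes AWS credentials are provided as env
# vars to the CI job and deps.edn has an :ion-dev alias with
# :main-opts ["-m" "datomic.ion.dev"].
set -euo pipefail

case "$CIRCLE_BRANCH" in
  master)  GROUP="prod-compute" ;;     # made-up compute group names
  staging) GROUP="staging-compute" ;;
  *)       echo "Branch $CIRCLE_BRANCH is not deployable"; exit 0 ;;
esac

# Push the current code; ion-dev prints the deploy command (including the rev).
clojure -A:ion-dev "{:op :push}"

# Deploy to the compute group selected by branch.
clojure -A:ion-dev "{:op :deploy, :group \"$GROUP\", :rev \"$CIRCLE_SHA1\"}"
```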
The datomic.ion.dev namespace should offer pretty flexible scripting capabilities.
one use case that I think datomic feels very well suited for is managing content. am I wrong?
thinking about requirements like auditing/tracking changes, reverting and rolling back changes, diffing changes…
plus the open information model seems extremely well suited to having the capability to define ad-hoc content “models”
I’ve tried to do the latter in a more static, relational way and you always end up just throwing some JSON in a column and calling it a day 😕
@cjsauer & @lilactown thanks for the thoughts, which sound reasonable. I haven’t investigated it myself, but hopefully it’s not too difficult to hook GitHub up directly from within AWS services.
Hey, I tried asking in #beginners, but apparently it was too specific of a question. I am starting to look at Datomic using the Datomic client API and am looking at how to manage my schema over time. When using the peer API I could use conformity for schema management, but that doesn’t support the Datomic client API… So I was wondering how other people manage their schema on Datomic. It can’t be that you manually transact the schema and never migrate, right?
I think one reason you’re not finding as much info about migrations using Datomic vs. other relational DBs is because they’re not as strictly necessary with Datomic’s open information model
the datomic cloud documentation has stuff about changing your schema: https://docs.datomic.com/cloud/schema/schema-change.html
since there’s a programmatic API to datomic and you already have the ability to atomically transact & revert, I could see a pretty simple wrapping API to ensure those changes have been made
you might even get away with just updating your schema definition and transacting that each time your app starts up ¯\_(ツ)_/¯
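That “transact on startup” approach could look something like the following minimal sketch, using the Datomic client API. The attribute, region, and db names are made up for illustration.

```clojure
;; Minimal sketch: transact the app's schema on every startup.
;; Assumes the Datomic client API; attribute and db names are hypothetical.
(require '[datomic.client.api :as d])

(def schema
  [{:db/ident       :user/email
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

(defn ensure-schema!
  "Schema transactions are idempotent, so this is safe to run on each start."
  [conn]
  (d/transact conn {:tx-data schema}))

;; At app startup, roughly:
;; (def client (d/client {:server-type :ion, :region "us-east-1", ...}))
;; (def conn   (d/connect client {:db-name "app-db"}))
;; (ensure-schema! conn)
```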
Yeah, I’d say so too, but it feels like this is a problem I shouldn’t be the first one to encounter, so to speak. Managing schemas is a very “core” problem, don’t you think?
yeah, but migrations are not only about adding fields, it’s also about maintaining consistency. Or at least it is in most frameworks I have seen. I may be looking at it wrong again though
I think schema migrations cover several problems that are worth thinking about:
1. Updating the database schema to reflect new information we may want to track
2. Updating rows/columns with derived or filler data in order to match the new schema
3. Tracking changes to the schema for audit purposes
4. Allowing easy revert/rollback of changes
5. Propagating these changes across databases (e.g. moving from staging to prod, we want to ensure the same changes are made)
also, the schema is reconstructable, as it is the sum of all migrations. And migrations themselves serve as a history of your database schema, which I feel is at a different level than the history of its contents.
But these problems are solved in most database frameworks, right? What I am saying is, I think that these issues are either not solved in the Datomic world (unlikely!), or there is a reason so compelling that they don’t have to be solved. I am looking for either a library or workflow that addresses these concerns, or a compelling reason why we shouldn’t care about them.
And given the focus Datomic appears to have (from its documentation) on a schema, I currently don’t see such a compelling reason. But, once again, I am a newbie in Clojure, so I must be missing the bigger picture here
I think that because Datomic operates at the attribute (not table) level, most changes end up being backwards compatible. So you can just have a `schema.clj` somewhere that transacts your currently-used schema on start, updated as needed and checked into git, which covers the 80% case.
there’s another library, called datofu, which also has mechanisms for migrations, but also speaks to why you might not need them: https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions
but anyway, I should probably let other people with more experience guide you since I’m still dipping my toes into Datomic as well 🙂 merry christmas eve!
@lennart.buit I ported schema ensure from the peer API rather easily. Here is a gist of the code and a small sample: https://gist.github.com/cjsauer/4dc258cb812024b49fb7f18ebd1fa6b5
The ensure-schema function would presumably be run on every app startup. Transactions only occur if the schema doesn’t already exist in the database. As the datofu readme linked above notes, though, schema transactions are idempotent, so there’s really no penalty to just transacting the schema on every app start...
3 is handled naturally by Datomic’s history capabilities, I would think... as well as by the schema being checked into source control. 4 is sort of contrary to Datomic’s “accrete only” best practice, but I might be misunderstanding. 2 is left unaddressed, yes. I imagine the ensure function’s contract could be augmented to include derived data... I haven’t attempted that myself though. @lennart.buit
3 refers to the change of schema instead of the data the database contains. That is indeed (partially) stored in a vcs, but only for attributes added. Changes made to the data in the database as a result of the schema being changed are not recorded. I personally see a difference between schema and data migrations, and sometimes they even go hand in hand.
4 is usually used for undoing borks. “Oh god deploy broke production, better revert”
2 is a hybrid schema/data migration, which I think is the most interesting. For example, when you split a field in two (think “full-name” -> “first-name” / “last-name”), you would want a structured way to do so across stages (dev/staging/production) other than “just doing it in the REPL”, right?
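A one-off data migration of that shape could be sketched like this. It’s a hedged illustration, not a recipe: the `:user/*` attribute names are made up, and (as noted below) naively splitting on whitespace is exactly the kind of name-handling shortcut you wouldn’t want in production.

```clojure
;; Sketch of a hybrid schema/data migration: derive :user/first-name and
;; :user/last-name from an existing :user/full-name. Attribute names are
;; illustrative only.
(require '[clojure.string :as str]
         '[datomic.client.api :as d])

(defn split-name-tx
  "Build tx-data for entities that still lack the new attributes,
  so the migration is safe to re-run per environment."
  [db]
  (for [[e full] (d/q '[:find ?e ?full
                        :where [?e :user/full-name ?full]
                               (not [?e :user/first-name])]
                      db)
        :let [[first-n last-n] (str/split full #"\s+" 2)]]
    {:db/id e :user/first-name first-n :user/last-name (or last-n "")}))

;; Run once per stage (dev/staging/prod), e.g. from a deploy step:
;; (d/transact conn {:tx-data (split-name-tx (d/db conn))})
```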
I believe one of the core principles of Datomic is that you don’t change the schema, but instead only grow it. If you realize you modeled the data in a less-than-ideal way, you should deprecate (but leave untouched!) the old schema and content. If for example an attribute is found to be a one-to-many relationship, rather than one-to-one, that is a new attribute, and the old schema and data must remain in order to guarantee backward compatibility. This may be why migrations don’t get as much attention around Datomic...
how so? splitting, without removing, `full-name` is something that is both a schema change (adding `first-name`/`last-name`) and a derivation of data, right?
(lets skip over the fact that splitting a name to first/last is a very hairy problem)
Well, the schema change is one operation, and going forward, your UI might start prompting new users to enter separate first/last names. So in that sense, the schema change is “simple”. Reshaping the old data is, as you mentioned, a very hairy (and separate) problem. The simplest solution in my opinion for a problem like that is documentation...it’s now just a known fact in the system that users created before December 24th, 2018 were using the full name field. It might not be the most attractive solution, but it’s surely “correct”.
So, I suppose it’s a different philosophy, and one that permeates Clojure rather deeply. It’s why you see libraries like clojure.spec remain in alpha for a very long time... design work is difficult, and it’s important to tease apart the data model upfront, because “fixing” it retroactively is nigh impossible.
And when I say “documentation” above, Datomic can help you with that: deprecation can live in the schema itself. You can transact the reason for the deprecation onto the attribute, along with a note like “use first/last name instead for new dev”.
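One concrete way to do that, sketched under the same made-up `:user/*` attribute names as above, is to update the attribute’s `:db/doc` in a plain (idempotent) schema transaction:

```clojure
;; Sketch: record a deprecation directly in the schema by amending the
;; attribute's :db/doc. Attribute name and wording are illustrative.
(require '[datomic.client.api :as d])

(defn deprecate-full-name! [conn]
  (d/transact conn
    {:tx-data [{:db/ident :user/full-name
                :db/doc   (str "DEPRECATED 2018-12-24: use :user/first-name "
                               "and :user/last-name for new development.")}]}))
```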
well, they are definitely separate problems in that sense: one operates on the schema and one on the data. However, in traditional database design, even though these changes operate on different levels (one on the “meta” schema level, the other on data), they are usually executed simultaneously and atomically. As if the world changed right under the feet of the application. Coming from such a background, I see all sorts of crazy coming from not maintaining the strong invariants I would enforce in a traditional relational db. Think: “everyone has a first name and a last name”.
but it wouldn’t be the first time my perspective on software engineering would differ from what is custom in the clojure world
In Datomic you’d have a much easier time shifting your view to “everyone has a first and last name as of December 2018, and a full name before that”. I absolutely sympathize with the desire to “fix” the old data, however...”legacy” is a four letter word in software development haha.
> (please don’t think I am discrediting your points! I enjoy this discussion) Of course :) I myself came from the “table/SQL” world, and so Datomic and Clojure practices are still a fun learning experience.
yeah, legacy is … uhh, the bane of our existence. But I have conflicting interests in trying to get things working, and realising I misunderstood the problem before. If I would ‘move fast and break things’, I would inevitably make lots of mistakes and therefore would need to migrate/deprecate. If I would not move fast, shipping may suffer.
I used to think, before coming to Clojure, that the only way to ‘solve’ these issues is by accepting that errors happen and have processes in place to break and not be stuck with all that legacy
Yep, the eternal struggle...perfection is always at odds with shipping. My entire career feels like one big lesson in finding the “sweet spot”. Clojure certainly approaches it. I think ”move fast and break things” is still a valid strategy behind the curtain (non-prod), and this is where the REPL really shines. Hammock time coupled with quick experiments at the REPL is an awesome development flow. I find myself in “the zone” pretty often this way.
Rich gave a good talk where he hinted at versioning at the function level. Something like “if you plan on breaking the `foo` function’s contract... don’t. Just create `foo2`, leave a note, and get on with your life. You don’t need a new name, and you don’t need to break anyone... just allow them to migrate at their leisure.”
Paraphrasing of course, but by the third watch I finally started to come around to the idea...our egos can invent problems so easily.
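The paraphrased advice above could look something like this in code (a hypothetical `foo`/`foo2` pair, not anything from Rich’s talk):

```clojure
;; Hypothetical illustration of function-level versioning: foo's contract
;; is frozen, and foo2 carries the new contract alongside it.
(defn foo
  "DEPRECATED for new development: see foo2, which returns a map."
  [x]
  (str x))

(defn foo2
  "Successor contract: returns a map instead of a string."
  [x]
  {:value x})
```

Callers of `foo` keep working unchanged and can move to `foo2` whenever they’re ready.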
I understand the concept, but I didn’t get to the point yet where I can readily accept these mantras. My inner perfectionist will rage at the sight of such impurities. Anyhow, thanks for the nice discussion and enjoy the holidays ^^!