This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-01-10
Channels
- # adventofcode (3)
- # aws (2)
- # beginners (85)
- # boot (8)
- # boot-dev (4)
- # cider (36)
- # clara (3)
- # cljs-dev (87)
- # cljsrn (3)
- # clojure (87)
- # clojure-austin (12)
- # clojure-brasil (1)
- # clojure-dev (8)
- # clojure-dusseldorf (5)
- # clojure-estonia (5)
- # clojure-greece (4)
- # clojure-italy (3)
- # clojure-spec (17)
- # clojure-uk (55)
- # clojurescript (70)
- # core-logic (2)
- # cursive (6)
- # data-science (18)
- # datomic (13)
- # emacs (34)
- # fulcro (347)
- # graphql (12)
- # hoplon (6)
- # jobs (3)
- # jobs-discuss (43)
- # juxt (2)
- # keechma (31)
- # leiningen (29)
- # lumo (2)
- # midje (2)
- # off-topic (118)
- # om-next (4)
- # onyx (39)
- # pedestal (6)
- # re-frame (85)
- # reagent (21)
- # remote-jobs (3)
- # ring (5)
- # rum (2)
- # shadow-cljs (126)
- # spacemacs (1)
- # sql (6)
A question not specific to Clojure, but still: where would you validate and check the data? In the database or in the web application (business logic tier)? Null checks (columns)? Uniqueness? A positive number less than 30? Enums? I am very confused about the role application logic plays, specifically in data validation
my rule of thumb is to do appropriate validations at system boundaries - if you drew a big diagram of the parts of your application (frontend, server side, database, services / APIs, etc.), any flow of data between two parts should have some sort of validation on the incoming data
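The boundary-validation idea above can be sketched with clojure.spec — a minimal, hypothetical example (all spec and function names here are assumptions for illustration, not from the thread):

```clojure
;; Sketch: validating data at a system boundary with clojure.spec.
(require '[clojure.spec.alpha :as s])

;; Inherent properties of the data itself:
(s/def ::age      (s/and pos-int? #(< % 30)))   ; positive number less than 30
(s/def ::status   #{:active :pending :banned})  ; enum as a set predicate
(s/def ::username (s/and string? seq))          ; non-nil, non-empty string
(s/def ::member   (s/keys :req-un [::username ::age ::status]))

;; Run the check where data enters the server side:
(defn accept-member [data]
  (if (s/valid? ::member data)
    data  ; hand off to business logic
    (throw (ex-info "Invalid member data"
                    {:explain (s/explain-data ::member data)}))))
```

The same spec can be reused at several boundaries (HTTP handler, queue consumer, etc.), so each part of the diagram validates what comes in without duplicating the rules.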
In addition (and I meant to answer this much earlier @itaied, sorry), I would distinguish between inherent properties of the data itself (non-nil, positive number in a given range) and system context properties (uniqueness of keys). I would validate the inherent properties in code wherever the data originates into the system (computed, inbound arguments, etc.), and then weigh the pros and cons of attempting system context checks in code vs. the database. For example, if you care about uniqueness, testing for it up front and avoiding a bunch of work will sometimes be worth it, compared to just letting an insert! fail (and ensuring you can catch the failure and determine it was a specific uniqueness constraint violation).
So "it depends" 🙂
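Letting the database enforce uniqueness and catching the failure might look like this with clojure.java.jdbc — a sketch, assuming a PostgreSQL backend and hypothetical table/function names:

```clojure
;; Sketch: let the unique constraint do the work, then detect that
;; specific failure (names and schema are assumptions).
(require '[clojure.java.jdbc :as jdbc])

(defn create-user! [db user]
  (try
    (jdbc/insert! db :users user)
    {:ok true}
    (catch java.sql.SQLException e
      ;; SQLState 23505 is PostgreSQL's unique_violation code.
      (if (= "23505" (.getSQLState e))
        {:ok false :error :duplicate-username}
        (throw e)))))
```

The key point is the last parenthetical above: you must be able to tell a uniqueness violation apart from any other SQL failure, here via the SQLState code, rather than swallowing all exceptions.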
For example, in our dating platform, we require usernames to be globally unique and email addresses to be unique per site. So on the forms where members input those values, we do ajax calls to our API to see if the uniqueness criterion is met and let them know before they even submit the form (and we'll also trap an insert! failure and communicate back to the member that an error occurred, though it may not be as specific). In that situation, the chance of a uniqueness check on the form data passing but the constraint on the DB failing in the short space of time taken to process the form is low enough that the trade-off is worth it.
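The ajax pre-check needs a small API endpoint behind it. A hypothetical Ring-style handler (endpoint, query, and helper names are all assumptions, not the real platform's API):

```clojure
;; Sketch: server-side endpoint backing the form's ajax uniqueness check.
(require '[clojure.java.jdbc :as jdbc])

(defn username-available? [db username]
  (empty? (jdbc/query db ["select 1 from users where username = ?" username])))

(defn check-username-handler [db]
  (fn [request]
    (let [username (get-in request [:params :username])]
      {:status  200
       :headers {"Content-Type" "application/json"}
       :body    (str "{\"available\": " (username-available? db username) "}")})))
```

Note this check is advisory only: two members can both see "available" and submit, which is exactly why the insert! failure is still trapped as a backstop.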
We have other situations where we try an insert! and, if it fails, we fall back to an update!.
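That try-insert!-then-update! fallback is an application-level upsert. A sketch of the pattern (table, key, and column names are hypothetical):

```clojure
;; Sketch: attempt the insert; on a unique-constraint hit, the row
;; already exists, so update it instead (PostgreSQL assumed).
(require '[clojure.java.jdbc :as jdbc])

(defn upsert-profile! [db profile]
  (try
    (jdbc/insert! db :profiles profile)
    (catch java.sql.SQLException e
      (if (= "23505" (.getSQLState e))  ; unique_violation: row exists
        (jdbc/update! db :profiles profile
                      ["member_id = ?" (:member-id profile)])
        (throw e)))))
```

Where the database supports it, the same effect can be pushed down entirely, e.g. PostgreSQL's `INSERT ... ON CONFLICT ... DO UPDATE`, avoiding the round trip on the failure path.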