#clojure-uk
2019-06-12
thomas07:06:10

mogge 😼

thomas08:06:48

Clojure problem... (well... a little bit...)

thomas08:06:33

I'm using clj-time and trying to parse some timestamps with a custom formatter: (def custom-formatter (f/formatter "dd/MM/yyyy HH:mm"))

thomas08:06:36

this one parses fine: 18/03/2019 19:11 yet this one 12/03/2019 09:56 gives me an exception: Invalid format: "12/03/2019 09:56"

thomas08:06:58

what could possibly go wrong?

thomas08:06:04

same function is called.

Olical08:06:40

Try 9 instead of 09?

thomas08:06:58

Invalid format: "12/03/2019 9:56" 😞

danielneal08:06:20

Try (f/formatter "dd/MM/yyyy' 'HH:mm")

danielneal08:06:42

although wouldn't explain why one parses and the other doesn't

thomas08:06:23

nope.. that doesn't do the trick... but let me look at the link.

thomas09:06:37

found part of the problem... the string that works is 16 chars long... the one that doesn't work is 17 chars long

thomas09:06:01

but the extra character doesn't print out.

dharrigan09:06:53

I think it works for me...

dharrigan09:06:03

(def foo (f/formatter "dd/MM/yyyy HH:mm"))

dharrigan09:06:09

(f/parse foo "12/03/2019 09:56")

dharrigan09:06:16

#object[org.joda.time.DateTime 0x140fa482 "2019-03-12T09:56:00.000Z"]

dharrigan09:06:37

(f/parse foo "18/03/2019 19:11")

dharrigan09:06:41

#object[org.joda.time.DateTime 0x1fd73dcb "2019-03-18T19:11:00.000Z"]

Wes Hall09:06:55

@thomas Probably nothing, but something that I always look for with this stuff... the second of your two examples would be a valid US date, whereas the first one would not. I wonder whether you are hitting some weird bug to do with locale inference.

thomas09:06:41

@wesley.hall I thought about that, but I would have thought that the custom formatter I use should take care of that...

Wes Hall09:06:58

Yeah me too, it's just the only real difference I can ascertain.

thomas09:06:26

I'll be concentrating on the hidden character for the moment... it is rather strange that the two strings don't have the same length... but when printed they do.

Wes Hall09:06:34

Oh, yeah, didn't see that message before I sent mine. If there is a hidden character in there, that sounds like a winner.

thomas09:06:31

(clojure.string/replace date #"\p{C}" "") solves the problem.
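(For anyone who finds this later, the whole bug can be sketched in plain Java — clj-time wraps Joda-Time on the JVM, and java.time behaves the same way for this pattern. The BOM below is an assumption about the culprit: Excel CSV exports often prepend U+FEFF, which is invisible when printed but still counts as a character, matching the 17-vs-16 length symptom. `\p{C}` matches the Unicode "other" categories, which include control and format characters like the BOM.)

```java
// Hypothetical reproduction of the invisible-character bug: a UTF-8 BOM
// (U+FEFF) prepended to an otherwise valid timestamp string.
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Arrays;

public class HiddenCharDemo {
    public static void main(String[] args) {
        String dirty = "\uFEFF12/03/2019 09:56";       // looks like 16 chars, is 17
        String clean = dirty.replaceAll("\\p{C}", ""); // strip Unicode control/format chars

        System.out.println(dirty.length()); // 17
        System.out.println(clean.length()); // 16

        // the byte-level view (cf. (seq (.getBytes s))): the BOM shows up as EF BB BF
        System.out.println(Arrays.toString(dirty.getBytes(StandardCharsets.UTF_8)));

        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("dd/MM/yyyy HH:mm");
        System.out.println(LocalDateTime.parse(clean, fmt)); // 2019-03-12T09:56
        // LocalDateTime.parse(dirty, fmt) would throw DateTimeParseException
    }
}
```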

Ben Hammond09:06:54

can you call (seq (.getBytes dodgy-string)) on it?

Ben Hammond09:06:22

what is the dodgy character, and where did it come from?

Ben Hammond09:06:36

and who sent it?

jasonbell09:06:25

"HowToDoInJava" does sound like a children's television channel.

thomas09:06:01

it is a .csv file I just created from Excel... I have done this before and didn't have this problem... so the problem might already be in the Excel file.

Ben Hammond09:06:02

a low-budget Rough Guide to Indonesia

šŸ‘ 4
thomas09:06:26

@jasonbell in that case it is the right website for me.

🙂 4
Wes Hall09:06:42

The hacker in me is now wondering how many popular websites deal well with strings containing unprintable characters 🙂

Wes Hall10:06:25

Have some \0007's in your database friend...

Ben Hammond10:06:02

I do miss the days when Ctrl-G made a beep

thomas10:06:42

as a workaround we had to introduce a non-printable character somewhere once on a web page... just using a space didn't do the trick, so I ended up looking at the unicode list of things that look like a space but aren't actually a space.

thomas10:06:19

and what I forgot to say of course... thank you for all your help @olical @danieleneal @dharrigan @wesley.hall @ben.hammond and a special mention for @jasonbell of course 😉

šŸ‘ 4
jasonbell10:06:45

@thomas honestly you don't need to mention my name, I was no help whatsoever.

dharrigan10:06:47

I have a standing desk at work now

dharrigan10:06:56

It's... different

dharrigan10:06:15

it's a motorised one, that goes all the way up and all the way down. What fun!

alexlynham10:06:08

I've been reading the spj and wadler imperative fp paper the last few days on coffee breaks (so about a line at a time based on how thick I am, haha) and hooooly hell. So many things so succinctly expressed in there

alexlynham10:06:49

the bit on continuations w.r.t. how javascript does all its async stuff was like, 'yep, okay, smart people already sussed this out, ofc'

thomas10:06:51

@jasonbell your mere presence in this universe made all the difference!

dharrigan11:06:56

Alex, can you provide a link?

yogidevbear11:06:45

This weather can really go fly a kite now 😩

thomas11:06:13

here it is only :rain_cloud:

💯 4
yogidevbear11:06:38

Oh, do you mean all the time all year round?

yogidevbear11:06:23

The joys of a paywall 😞

yogidevbear11:06:47

TFW you realise there was a Haskell mailing list back in 1988!

djtango11:06:33

@thomas nasty bug you found there

djtango11:06:24

I came across a similar issue with casting strings to ts_query in postgres

thomas11:06:43

@djtango thank you. it took a bit of time to figure out what the problem was... as you couldn't 'see' it

djtango11:06:47

interestingly, spec was great at generating illegal strings - managed to flush out a whole bunch of unusual edge cases

djtango11:06:20

> @djtango thank you. it took a bit of time to figure out what the problem was... as you couldn't 'see' it

So cruel

djtango11:06:12

Excel 🙂 ❤️

Wes Hall12:06:47

How about a controversial topic for a Wednesday afternoon...? Unit test suites and TDD. I am finding myself rapidly developing the position that these techniques serve mostly as a training aid to teach junior and mid-level developers how not to write shitty code. I come across an awful lot of code that I look at and think, "you wouldn't have been able to create this utter mess if you had forced yourself to write tests...". However, I am becoming less convinced that 1) this is the only way to achieve this goal (as opposed to say, laying down a "gold standard" style in the initial stages of the project and loudly and publicly shaming deviants from it), and 2) that the effort, time and investment involved in the development of a truly comprehensive testing suite would not be better spent in building the features that push the project forward... Discuss?

Wes Hall12:06:54

I'd just sneakily add that the much more rapid feedback cycles (/me kisses his REPL lovingly) available in more modern development introduce an important variable to consider.

mccraigmccraig12:06:00

@wesley.hall disagree. i don't care whether you write your tests first or afterwards, but i've found a unit test suite is invaluable for supporting refactorings in any non-trivial codebase

Wes Hall12:06:13

@mccraigmccraig Sure, I mean I am not saying they are useless. If you have them, they are valuable. The kinds of things I wonder about is whether the time and effort taken to create the test suite has a positive ROI. For example, if you could estimate the time it would have taken you to perform those refactorings without the test suite, subtract the time it actually took with them, would this be greater or less than the time it took to create the suite in the first place?

Wes Hall12:06:59

I actually quite like writing tests, so I tend to do it anyway, but these are the questions that keep me up at night 🙂

mccraigmccraig12:06:18

i don't think it's just about the time taken, although that's obvs a factor - but without the test suite then we would be putting buggier software into customers' hands, leading to avoidable release-cycles and a fear of refactoring - and once you have a fear of refactoring you are in deep trouble

mccraigmccraig12:06:28

if i was writing in haskell though, the question would be harder to answer

thomas12:06:59

@mccraigmccraig would you call your tests unit tests? or integration tests? or maybe even system tests/end-to-end tests?

thomas12:06:47

because it sounds like the tests you are referring to aren't quite unit tests?

mccraigmccraig12:06:42

i have no idea @thomas - some are definitely unit-tests, but many use a db instance so are perhaps a kind of integration test, but the line seems blurry and we haven't really made any effort to distinguish

Wes Hall12:06:50

I definitely agree with the last part. The interesting thing there though, is that I have this slightly niggling feeling that by the time you get good enough at writing unit tests to really engender that "courage" attribute, you are good enough to write inherently refactorable code without them. I've definitely seen systems that had a test suite but where people were still frightened of refactoring because either the tests were not comprehensive enough, or the tests were so coupled to the design that changing the design meant having to do a lot of work on the tests as well... which defeats the purpose and actually, if anything, increases the fear of refactoring because now you have to refactor two things 🙂

Wes Hall12:06:07

Last part = fear of refactoring, sorry chat moved a bit fast there.

mccraigmccraig12:06:52

i guess that whenever a good practice arises there will always be the possibility of implementing the letter rather than the spirit, and shooting yourself in the foot in the process

Wes Hall12:06:01

I suppose what I am saying is that, "Learning to write good tests" and "Learning to write good code" are the same exercise 🙂

mccraigmccraig12:06:15

now that i wholeheartedly agree with 🙂

Wes Hall12:06:05

I once worked on a project for a very well known, major high street bank (probably best not to identify any further), that had a strict, "test coverage" metric. The result of this was a whole bunch of tests written by developers that actually tested nothing at all... but were just there to exercise the code and hit the coverage metric.... that was lots of fun!

Wes Hall12:06:51

Another interesting question on this subject would be, "Is there a project configuration, particularly in terms of the skills and experience of the team involved, where even the most diehard advocate of the unit testing approach would be wise to forego them?"

Wes Hall12:06:03

@mccraigmccraig Also, Haskell comment was interesting. I suppose you mean that the presence of a reliable type checker (as well as some of the other "strictness" of the Haskell language / compiler) has an important effect. I would also definitely agree with this.

mccraigmccraig12:06:20

@wesley.hall yes - lots of the errors that refactorings commonly introduce - bad names, bad shape assumptions, unhandled cases - can't happen with well considered static checks, leaving your tests to handle semantics which is really just what you want

Wes Hall12:06:21

The effort to use the clojure/spec stuff to automate some of this testing load is also interesting, but honestly I have never really figured out a way to use this without ending up in a situation where I simply reimplement the function itself in the :fn part of the fdef. Even the documentation for this says, "A spec of the relationship between args and ret"... the only problem there is that the best spec of the relationship between the arguments and the return value is the function body 🙂

alexlynham14:06:34

> I've definitely seen systems that had a test suite but where people were still frightened of refactoring because either the tests were not comprehensive enough, or the tests were so coupled to the design that changing the design meant having to do a lot of work on the tests as well

testing does nothing if the programmers don't know how to design things

alexlynham14:06:56

decoupled things scale, large things made of small things scale, and since scaling is usually a proxy for change, and refactoring is a proxy for progressive change, it drives one towards the assumption that a lot of ineffective testing is polishing a turd

alexlynham14:06:50

it's almost like designing for change in a pragmatic way is the way forward

šŸ‘ 4
alexlynham14:06:37

tests as feedback are useful in langs that don't have a REPL - large JS codebases where you can run 2000 tests in 30s spring to mind - but the only case I can think of for no tests is a serverless system you can attach a REPL to, because the parts will be so small that you can check them at the repl, or in a real-life staging env, because sls deploy is as easy as running a test suite

alexlynham14:06:09

but even then I would write tests, because I don't think I ever regret writing them, but I often regret not writing more 🙂

Wes Hall14:06:03

@alex.lynham I'm pretty sure I am onboard with all of that. I do think it is important not to assign a zero cost to the practice. Generally speaking, I find that the effort involved in writing the tests for a feature can vastly outweigh the effort involved in implementing the feature. At the extreme end, the absolute commitment to a fully comprehensive set of unit tests for a given project could well increase the cost of initial development by an order of magnitude, though perhaps 2-5x is more common. For a while, I was in danger of falling into a bit of a niche (one that I thankfully seem to have escaped), of being the, "lead the rewrite" guy. These were very painful projects, that involved essentially rebuilding successful software products whose codebase had degraded to the point that they had effectively hardened into stone. I do remember thinking at the time, "if these guys had begun with a commitment to quality code and quality tests they wouldn't be in this god damn mess", but as the years have gone by and as I have found myself more and more involved in the strategic side of product development I have added the additional thought, "....but if they had begun with that commitment, would they have emerged as the winner in their space, or would they have been beaten by somebody less fastidious?". I think these are interesting questions. Building comprehensive tests is essentially an investment in the future. At least in the start-up space, I wonder how many promising projects and teams have lost the fight because they spent too much time on all this. Those people, if confronted with an objective view of their ultimate failure, probably would regret writing them.

Wes Hall14:06:09

That said, I have seen plenty of cases where the codebase experienced this hardening long before the company / team had achieved the kind of success that would enable them to invest in a modernisation... so it's not a simple calculation obviously.

thomas14:06:38

I remember asking one of the U-switch people about testing (they claimed to be able to deploy in a matter of minutes) and they (allegedly, maybe someone can tell us more about this) do very little testing and rely on metrics in production to see if things go wrong.

Wes Hall14:06:01

@thomas Indeed, I think cultural tolerance for bugs in production is yet another metric for the mix. The aforementioned bank had multiple-month gaps between a code-freeze and production deployment in order to do some pretty hardcore manual testing to go along with their (ridiculously ineffective) automated test suite. Obviously, if you are a bank, tolerance for production bugs is essentially zero. For other companies though, it's much higher. A high tolerance for production bugs coupled with an effective CD flow might also mitigate the need for extreme commitments to automated testing, though I am too old-school and too paranoid not to want my CD pipeline to do some basic QA at least.

thomas14:06:18

very true... different environments have different requirements. Banking being a very good example of having next to no errors. At IBM we used to deal with banks and they were notoriously conservative when it came to rolling out patches and upgrades to their systems and doing lots of testing before doing so.

alexlynham15:06:38

the testing in production/observability first thing is interesting… I mean, the majority of bad bugs I've seen in the wild weren't caught by the test suite after all… but I think that's probably more cultural as wes suggests. CI/CD plus a cultural tolerance for deploying frequently and fixing forwards is the ideal combo

jasonbell16:06:44

There's tests, tests, production tests and, "I wish I'd thought of that earlier" tests.

simple_smile 8
💯 4
3Jane18:06:25

I once got a lightbulb moment when someone said "test code is ALSO code, and requires code maintenance"

rickmoynihan18:06:43

⬆️ this. Tests are great if they're adding value. You should have as few as possible, and as many as necessary. Just like with code.

3Jane18:06:45

It's less that good tests lead to learning how to write code, and more that they reflect skills of decoupling (analysis) and problem specification

3Jane18:06:51

You cannot write good tests if you don't understand what the crux of the problem is. Same for documentation (in code). One well placed comment is worth more than a whole file of boilerplate docs

3Jane18:06:41

Also +1 to the @jasonbell test hierarchy :D

rickmoynihan19:06:36

I agree entirely with all of that @lady3janepl … it's the main problem with TDD, which is no substitute for design at all. It's just a way of encouraging particular styles of coding… using interfaces, DI, etc… I think it's also worth mentioning that you can't judge test quality on the test code alone. Its quality is inherently coupled to the code in question, so they're in a kind of ☯️ relationship. And that coupling largely determines how effective they are at finding problems. A test that never fails adds little value; unless it's documenting or demonstrating an important condition / property. Likewise a frequently failing test may be really useful, or useless, if it's failing for the wrong reasons.

seancorfield20:06:45

Fascinating discussion (a bit late catching up!). The two places where I always do TDD are 1) developing a new (REST) API endpoint and 2) bug fixing (for a certain class of bugs). For a new API endpoint, I'll write tests to match the specification in terms of what input should produce error responses and a couple of happy path tests (generatively, if possible). Then I'll build the API endpoint, working until all the tests pass (and I usually figure out a couple more tests as I'm writing the API code). That way I have a) good confidence that the API performs per spec and b) I now have a "live spec" in terms of tests that anyone can look at to understand the docs (or to check what the current behavior should be if the docs get out of date... as they often do).

seancorfield20:06:34

For certain classes of bugs, where it's clear how to write a test for the correct behavior that will fail because of the bug, I write the test code first, verify it fails -- to double-check my understanding of the buggy behavior -- and then attempt to fix the bug (so the new test passes).

seancorfield20:06:15

But I've found (over time) that there are a lot of things that really aren't worth writing tests for -- but it's hard to articulate beyond a gut sense that something isn't worth trying to bake into stone as a test case, because you feel the code will change too much and the tests just become "busy" maintenance work

seancorfield20:06:49

The more I work with Clojure, the more I tend to put "test expressions" in (comment ..) forms in the source code as a way to perform quick "sanity checks". Some of these might become tests but most don't and they serve as usage/behavior hints for anyone that touches the code later.

seancorfield20:06:20

I don't believe that the presence/absence of tests has anything to do with "good code" (in terms of "high quality" code) -- people who write bad code can write bad tests too or omit tests and people who write good code can do the same 🙂

Wes Hall21:06:18

@rickmoynihan
> A test that never fails adds little value

I remember a fun little module in my Java days which I believe was called "Jester". It would mutate the main codebase and rerun the tests expecting them to fail. So it might change a constant, or invert a predicate etc etc. I remember thinking what a fantastic idea that was. Coverage in the sense of lines of code exercised is an utterly useless metric, but if a tool like that cannot find a way to change your code without causing your tests to fail, this always struck me as the definitive measure of true coverage.
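(The principle Jester implemented can be shown with a hand-rolled toy — this has nothing to do with Jester's actual API, it just sketches the idea: apply a mutation such as inverting a predicate, rerun the tests, and see whether any of them notices. A test that merely exercises the code lets the mutant survive; a test that pins down behaviour kills it.)

```java
// Hand-rolled illustration of the mutation-testing idea: if mutating the code
// under test doesn't break any test, the tests aren't really covering it.
import java.util.function.IntPredicate;

public class MutationDemo {
    // "production" predicate, and a mutant with the comparison inverted
    static final IntPredicate original = n -> n > 0;
    static final IntPredicate mutant   = n -> n <= 0;

    // a weak test: exercises the code (100% line coverage!) but asserts nothing
    static boolean weakTest(IntPredicate p) {
        p.test(5);
        return true; // passes for original AND mutant -- the mutant survives
    }

    // a strong test: pins down behaviour on both sides, so the mutant is killed
    static boolean strongTest(IntPredicate p) {
        return p.test(5) && !p.test(-5);
    }

    public static void main(String[] args) {
        System.out.println(weakTest(original) == weakTest(mutant)); // true: weak test can't tell them apart
        System.out.println(strongTest(original));                   // true
        System.out.println(strongTest(mutant));                     // false: mutant detected
    }
}
```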

rickmoynihan08:06:21

Yeah mutation testing is a really nice idea. Never actually used it, though.

seancorfield21:06:55

That reminds me of an experimental "super optimizer" I saw at a small UK compiler company back in the 90's... It randomly mutated a series of instructions and ran them in a simulator to see if it could find a faster series of opcodes that produced the exact same state (memory, registers) as the original from random input states.
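(The shape of that superoptimizer idea fits in a few lines, under heavy assumptions: a made-up four-op instruction set, exhaustive breadth-first search instead of random mutation, and a handful of probe inputs standing in for full simulator state. Enumerate candidate instruction sequences and accept the first — hence shortest — one that reproduces the reference computation on every probe.)

```java
// Toy "superoptimizer": search for the shortest op sequence matching a
// reference computation on a set of probe inputs. All names are invented.
import java.util.*;
import java.util.function.IntUnaryOperator;

public class ToySuperopt {
    // a tiny hypothetical instruction set
    static final Map<String, IntUnaryOperator> OPS = new LinkedHashMap<>();
    static {
        OPS.put("inc", x -> x + 1);
        OPS.put("dec", x -> x - 1);
        OPS.put("dbl", x -> x * 2);
        OPS.put("neg", x -> -x);
    }

    // the reference program we want to reproduce: x -> 2*(x+1)
    static int reference(int x) { return 2 * (x + 1); }

    // breadth-first search over op sequences: first match found is a shortest one
    static List<String> search(int maxLen, int[] probes) {
        Deque<List<String>> queue = new ArrayDeque<>();
        queue.add(List.of());
        while (!queue.isEmpty()) {
            List<String> seq = queue.poll();
            if (matches(seq, probes)) return seq;
            if (seq.size() < maxLen) {
                for (String op : OPS.keySet()) {
                    List<String> next = new ArrayList<>(seq);
                    next.add(op);
                    queue.add(next);
                }
            }
        }
        return null; // nothing within maxLen reproduces the reference
    }

    // run the candidate sequence on every probe and compare against the reference
    static boolean matches(List<String> seq, int[] probes) {
        for (int x : probes) {
            int v = x;
            for (String op : seq) v = OPS.get(op).applyAsInt(v);
            if (v != reference(x)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] probes = {0, 1, 2, 3, 7, -5, 100}; // stand-in for random input states
        System.out.println(search(4, probes));   // [inc, dbl], i.e. (x+1)*2
    }
}
```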

Wes Hall21:06:00

@seancorfield "Machine learning" would definitely feature prominently on that pitch deck had this been invented in 2019 🙂

😄 8
Wes Hall21:06:10

By the way, absolutely agree with you on:
> people who write bad code can write bad tests too

...though I have found that some TDD exercises are a fairly good way to make the penny drop for relatively junior developers. That magical moment when the mission changes from, "make something that works", to "make something that can be understood and maintained". I think that the practical value of various degrees of unit testing can be debated and probably will be for years to come, but as a training technique it can be hard to beat.

seancorfield22:06:10

I either missed that comment earlier or it didn't register. An interesting hypothesis. I haven't done much training/mentoring around TDD so I haven't seen that but you could well be right.