On 25 August 2018, the book “The Netherlands and synthetic drugs” by Pieter Tops, Judith van Valkenhoef, Edward van der Torre and Luuk van Spijk was published. In this book, the authors calculated that 971 566 879 XTC pills are produced annually in the Netherlands, 80% of which are exported. The book prompted many reactions, and the authors responded by recalculating the total turnover using a different export percentage. In this blog post I respond by calculating how much XTC the world consumes every year, a number that implies that it is not the export percentage that is incorrect, but the estimate of total production.
Recently, a report (“Waar een klein land groot in kan zijn”) by Pieter Tops, Judith van Valkenhoef, Edward van der Torre and Luuk van Spijk (all employed by the Dutch Police Academy) came out, drawing two sensational conclusions. First, that every year, the Netherlands produces € 18 916 882 439 (18.9 billion euros) worth of synthetic drugs. Second, that every year, 194 313 376 ecstasy pills are used in the Netherlands. This latter conclusion is clearly wrong, which casts doubt on the veracity of the former. Both the original report and the abbreviated version are written in a sensationalist tone; might the authors have neglected to verify their impressive conclusions with sufficient rigor?
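The two headline numbers are linked by simple arithmetic: the domestic-use figure of 194 313 376 pills is exactly the 20% of the estimated annual production of 971 566 879 pills that is not exported at the assumed 80% export rate. A quick check (the figures are the report's own, not independently verified):

```python
# Reproduce the report's domestic-use figure from its
# production estimate and assumed export share.
production = 971_566_879   # estimated pills produced per year
export_share = 0.80        # share of production exported, per the report

domestic_use = production * (1 - export_share)
print(round(domestic_use))  # 194313376
```

This makes explicit that the two claims stand or fall together: if the domestic-use figure is implausible, the production estimate it was derived from is implausible too.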
In health psychology, there exists a lack of conceptual clarity regarding a number of terms that are at the core of psychological science. True, this problem exists in psychology in general, but the terms Behavior Change Technique (from the BCT taxonomy approach) and Method for Behavior Change (from the Intervention Mapping approach) have exacerbated matters within behavior change science. In this post, I will discuss this in more detail, based on a recent Twitter discussion that erupted around whether a psychological variable targeted by a behavior change technique is a mediator or not:
Where by 'mediators', you mean 'determinants' I guess? I don't think 'mediator' is the right term – there is no predictor variable. A manipulation is an operationalisation of a construct – if the manipulation is valid, the construct changes, *that* is the variable, the predictor.
— Gjalt-Jorn Peters (@matherion) June 7, 2018
In this post, I will explain more in detail what I mean (you may want to read the Twitter thread first though).
This is a draft contribution to a discussion in the Facebook group Psychological Methods Discussion Group.
The reason regression analyses are not a useful tool for determining the relative relevance of behavioral determinants has three components.
[ Note: this is a first draft, a preprint of a blog post so to speak 🙂 ]
A recent 72-author preprint proposed to recalibrate when we award the qualitative label ‘significant’ in research in psychology (and other fields) such that more evidence is required before that label is used. In other words, the paper proposes that researchers have to be a bit more certain of their case before proclaiming that they have found a new effect.
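For context: the proposal at issue (the “Redefine statistical significance” preprint) suggests lowering the conventional significance threshold from α = .05 to α = .005. One way to see what “more evidence” means in practice is to compare the two-sided critical z values under both thresholds; a minimal sketch using only Python's standard library:

```python
# Two-sided critical z values under the conventional alpha = .05
# and the proposed stricter alpha = .005.
from statistics import NormalDist

std_normal = NormalDist()
z_05 = std_normal.inv_cdf(1 - 0.05 / 2)    # ~ 1.96
z_005 = std_normal.inv_cdf(1 - 0.005 / 2)  # ~ 2.81
print(round(z_05, 2), round(z_005, 2))
```

In other words, an effect has to be roughly 2.81 rather than 1.96 standard errors away from zero before it earns the label ‘significant’, which in practice means collecting more data or studying larger effects.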
The paper met with resistance, and although any proposal for change usually does, what is interesting here is that the resistance came in part from researchers involved in Open Science (the umbrella term for the movement to mature science through openness, collaboration and accountability). Since these researchers often fight for improved research practices ‘at all costs’, this resistance seems odd.
Thus ensued the Alpha Wars.
[Image by Silver Blue, https://flickr.com/photos/cblue98/]
[These are some thoughts that I’ll eventually work into a paper, so it may be a bit rough/drafty]
Psychology is characterized by an interesting paradox. On the one hand, it’s a very popular topic. After all, everybody’s a person, and the most important influences in most people’s worlds are other people. Who doesn’t love learning about oneself, one’s loved ones, one’s boss, and the leaders of one’s country? People are endlessly complex, so psychology and psychological research provide a veritable fount of knowledge.
On the other hand, that complexity of human psychology is tenaciously denied. It is almost as if that complexity is treated like a spiritual entity: safe to invoke whenever it is convenient to stare in wonder at the awesome quirks of nature and the never-ending weirdness of people, but blissfully disregarded whenever it is threatening or gets in the way of day-to-day activities.
[ UPDATE: a commentary based on this blog post has now been published in the Journal of Informetrics at http://www.sciencedirect.com/science/article/pii/S1751157717302365 ]
Recently, a preprint was posted on arXiv exploring the question “Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers?”. The abstract concludes as follows:
The results of the study indicate that the JIF (in its normalized variant) is able to discriminate between researchers who published papers later on with a citation impact above or below average in a field and publication year – not only in the short term, but also in the long term. However, the low to medium effect sizes of the results also indicate that the JIF (in its normalized variant) should not be used as the sole criterion for identifying later success: other criteria, such as the novelty and significance of the specific research, academic distinctions, and the reputation of previous institutions, should also be considered.
In this post, I aim to explain why this is wrong (and moreover, how following this recommendation may retard scientific progress), and I have a go at establishing a common-sense framework for researcher selection that might work.