interesting reads

In the news / New articles / Conferences and seminars / Open positions

You can now subscribe to the Environmental Economics Calendar on the right-hand side by copy-pasting the calendar.ics link into the calendar program of your choice.


For all you readers who are questioning yourselves, your economics profession, or a convex combination of the two, I shall give a summary and interpretation of what Lionel Robbins, one of the great godfathers of economic thought, believed we (environmental) economists should be doing. Hopefully you will find yourself mirrored in these views.


Merry Christmas and a Happy New Year to everyone out there.

In the news / New articles / Conference and seminar announcements / Open positions


  • Fake meat solution: Many people switched to a vegetarian diet simply for moral reasons, or because they understood that regular meat consumption carries serious health impacts. For example, it is now widely accepted that eating meat every day is roughly as dangerous for you as smoking a pack of cigarettes every day. Thus, if you are a health-conscious person, or if you rank the welfare of animals sufficiently highly, yet at the same time enjoy a tasty burger, then fake meat may be just the alternative you were searching for!
  • Investors fear the upcoming decommissioning of nuclear plants in Germany: Estimates for the decommissioning of German nuclear power plants range from 30 to 70 billion euros. In contrast, the provisions for dismantling made by Germany’s nuclear utilities amount to 39 billion euros, certainly at the lower end of the cost spectrum. Given that costs in the nuclear industry tend to be underestimated by a factor of 1.5 to 2, we can imagine that the final costs will be even higher. As a result, investors are selling their shares in German nuclear utilities. I attach below the share prices for E.ON and RWE in Germany. As you can see, their share prices have dropped significantly during the past two years.
  • World can be 100% renewable energy at little or no extra cost: Zachary Boren has written a good summary of the latest Greenpeace Energy (R)evolution Scenario. The bottom line is that, according to Greenpeace's estimates, the world can completely phase out BOTH non-renewables AND nuclear by 2050 at no extra cost. Has anyone deeply analyzed Greenpeace's assumptions? The zero-extra-cost result seems a bit optimistic to me. But hey, even at some extra cost this would be an interesting option. From our integrated assessment climate change models, which are basically larger economic growth models with a climate change feedback, we economists generally find that it is optimal to continue relying on fossil fuels for quite a while and to accept some degree of warming. But it is also clear from these models and subsequent sensitivity analyses that the more realistic feedbacks and information we add, such as uncertainty, health costs from fossil fuel use, or the potential for technical change in the renewable sector, the earlier we will want to switch to renewables and phase out non-renewables. While we economists may focus too much on the optimal policy given our limited models, we may also want to consider that the small economic costs of completely phasing out nuclear and non-renewables may be strongly outweighed by a world without human-induced climate change and with fewer worries about nuclear accidents.
  • I added a new calendar to the blog (in the right sidebar). I call it the Environmental Economics Calendar and will use it to inform interested readers about environmental economics seminars around Paris, as well as about environmental economics conferences and workshops. If you have an interesting workshop/conference/seminar that you would like to see advertised/announced, please let me know.
  • I also take the opportunity to provide some preliminary information on a workshop that I co-organize on the 6th of July 2015 at IPAG, Paris: The changing role of economics and economists in nuclear policy and politics. This workshop will be a side event to the huge Our Common Future under Climate Change conference that will be held in Paris, 7th-10th of July. We have very interesting speakers so far: Tom Burke (E3G, Imperial College London), Dominique Finon (CNRS), Jan-Horst Keppler (OECD/NEA), Patrick Momal (IRSN), Gordon MacKerron (University of Sussex), Steve Thomas (University of Greenwich), William Nuttall (Cambridge University), and the BBC journalist Rob Broomby, who is going to chair the panel discussion. More information to follow soon. Registration: Attendance is free, but registration is required by the 19th of June 2015. Please follow the link to send an email with the subject line “Nuclear workshop registration” in order to confirm.
  • The psychology journal Basic and Applied Social Psychology has banned the use of p-values in empirical articles, see HERE. Is this a useful change, and will other journals follow? The p-value is used in statistical tests to tell you something about how significant your results are. For example, suppose we try to understand the impact of x on y using the regression y = alpha*x + e, where alpha is the coefficient to be estimated and e the error term. We would test the null hypothesis alpha = 0 (H0). A p-value below 0.001 is then interpreted as meaning that the coefficient alpha is highly statistically significantly different from zero. Or, more precisely: if alpha were truly zero, we would observe an estimate at least this extreme in fewer than 1 out of 1000 samples.
    Thus, if the p-value we obtain is, e.g., lower than 0.001, then we would reject H0. The problem is that this only indicates that we expect the coefficient not to be zero; it does not tell us what value the coefficient actually takes. If the p-value is above 0.001, then we cannot reject H0 at that significance level. However, we can never accept H0.
    I guess it is for these reasons that the editors of Basic and Applied Social Psychology decided to ban the use of p-values. In my opinion, a regression result should always be interpreted as the best case that a researcher can make for a hypothesis, or for a model. It is quite clear that one can always make a worst case, too: including or dropping some variables, using another time-series filter, another group of countries, another time period, another treatment of spatial or cross-observation correlation, or another regression method is always likely to lead to a different statistical result. I hypothesize that no statistical result is so robust that it holds for any (statistically reasonable) change in the modeling assumptions. For if there were, we would not need statistical inference! And if we did not rely on statistical inference, what would be the use of statistics, if we could not study hypotheses and at least know that there exists a best-case result for our model?
    Thus, in my opinion, most journals are not going to follow this ban. Moreover, if one understands the limitations of p-values and how they should be used, then there is nothing really wrong with them, and they add a little bit of information to the results.
    Furthermore, and maybe most importantly, I think it should remain best practice not to simply throw an econometric regression at an audience, but to start by making a convincing case based on a model that captures the most important relationships between the variables in question, and then to provide a best-case scenario for this model. This best-case scenario should be complemented with robustness exercises that show under which conditions the best case continues to hold, and also when it no longer does. This gives some understanding of the robustness, of where the model may go wrong, or of where the data does not fit.
    Furthermore, if statistical analysis does not support a model, this should not directly invalidate the model. As a friend once told me: “If the data does not fit the model, then too bad for the data!” Clearly, the question then is where the problems with the model lie, or why the data does not support it. The point I am trying to make is that we should be much more careful in our statistical analysis in general: not simply run one regression or another, but think clearly about what we want to know, what model we have in mind, how robust our results are, when they cease to be robust, what the limitations are, and where they come from.
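The mechanics of the test described above can be sketched in a few lines of Python. This is a minimal illustration with simulated data (the variable names and the true coefficient of 0.5 are my own assumptions, not from any study): it fits y = alpha*x + e by ordinary least squares and shows how the p-value on alpha should, and should not, be read.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate y = alpha * x + e with an assumed true effect alpha = 0.5
n = 1000
x = rng.normal(size=n)
e = rng.normal(size=n)
y = 0.5 * x + e

# OLS fit of y on x; the reported p-value tests H0: alpha = 0
res = stats.linregress(x, y)
print(f"alpha_hat = {res.slope:.3f}, p-value = {res.pvalue:.2e}")

# A p-value below 0.001 says: if alpha were truly zero, an estimate
# at least this extreme would occur in fewer than 1 out of 1000
# samples, so we reject H0. It does NOT tell us what value alpha
# actually takes; for that we look at the point estimate and its
# confidence interval.
ci_low = res.slope - 1.96 * res.stderr
ci_high = res.slope + 1.96 * res.stderr
print(f"approx. 95% CI for alpha: [{ci_low:.3f}, {ci_high:.3f}]")
```

Rerunning this with dropped observations, a different sample, or a different specification is exactly the kind of robustness exercise argued for above: the p-value from a single regression is only the best case for the model at hand.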