# Research articles for 2020-05-10

arXiv

In the context of the Paris Agreement, Chile has pledged to reduce greenhouse gas (GHG) intensity by at least 30% below 2007 levels by 2030 and to phase out coal as an energy source by 2040, among other strategies. In pursuit of these goals, Chile has implemented a $5 per tonne tax on CO2 emissions, the first of its kind in Latin America. However, such a low price has proven insufficient. In our work, we study an alternative approach for capping and pricing carbon emissions in the Chilean electric sector: the cap-and-trade paradigm. We model the Chilean electric market (generators and an emissions auctioneer) as a two-stage capacity expansion equilibrium problem, in which we allow future investment and trading of emission permits among generator agents. The model studies generation and future investment in the Chilean electric sector under two demand regimes: deterministic and stochastic. We show that the current Chilean GHG intensity pledge does not drive an important shift in the future Chilean electric matrix. To encourage a shift to greener technologies, a more stringent carbon budget must be considered, resulting in a carbon price approximately ten times higher than the present one. We also show that achieving the emissions reduction goal does not necessarily result in further reductions of carbon generation, or in phasing out coal in the longer term. Finally, we demonstrate that under technology cost reductions, higher demand scenarios will relax the need for stringent carbon budgets to achieve new renewable energy investments and hence meet the Chilean pledges. These results suggest that some aspects of the Chilean pledge require further analysis of their economic impact, particularly in light of the recent announcement of achieving carbon neutrality by 2050.

arXiv

In this paper, we study the asymptotic behavior of implied volatility in an affine jump-diffusion model. Assuming the log stock price under the risk-neutral measure follows an affine jump-diffusion model, we show that an explicit form of the moment generating function of the log stock price can be obtained by solving a set of ordinary differential equations. A large-time large deviation principle for the log stock price is derived by applying the G\"{a}rtner-Ellis theorem. We characterize the asymptotic behavior of the implied volatility in the large-maturity and large-strike regime using the rate function in the large deviation principle. The asymptotics of the Black-Scholes implied volatility for the fixed-maturity, large-strike and fixed-maturity, small-strike regimes are also studied. Numerical results are provided to validate the theoretical work.
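For reference, the Gärtner-Ellis machinery the abstract invokes takes the following standard form (notation ours, not necessarily the paper's). With $X_t$ the log stock price, one computes the limiting scaled cumulant generating function and its Legendre transform:

```latex
\[
\Lambda(p) \;=\; \lim_{t \to \infty} \frac{1}{t} \log \mathbb{E}\!\left[ e^{\,p X_t} \right],
\qquad
I(x) \;=\; \sup_{p \in \mathbb{R}} \bigl( p x - \Lambda(p) \bigr),
\]
\[
\text{so that, informally,}\quad
\mathbb{P}\!\left( X_t / t \approx x \right) \;\approx\; e^{-t\, I(x)}
\quad \text{for large } t.
\]
```

The rate function $I$ is what drives the large-maturity, large-strike implied volatility asymptotics described in the abstract.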

arXiv

The construction of minimum spanning trees (MSTs) from correlation matrices is an often-used method for studying relationships in financial markets. However, most of the work on this topic uses the Pearson correlation coefficient, which relies on an assumption of normality and can be brittle in the presence of outliers, neither of which is ideal for the study of financial returns. In this paper we study the inference of MSTs from daily US financial returns using Pearson correlation and two rank correlation methods, Spearman's $\rho$ and Kendall's $\tau$. We find that the trees constructed using the rank methods tend to be more stable and maintain more edges over the dataset than those constructed using Pearson correlation, that there are significant differences in the centrality assigned to various sectors, and that despite these differences the trees tend to have similar topologies.
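A minimal sketch of the tree-construction step described above, assuming daily returns sit in a pandas DataFrame (the tickers and data here are synthetic placeholders, not the paper's dataset). The standard transform $d_{ij} = \sqrt{2(1-\rho_{ij})}$ converts a correlation into a distance before the MST is extracted:

```python
import numpy as np
import pandas as pd
from scipy.sparse.csgraph import minimum_spanning_tree

# Synthetic stand-in: a (days x assets) DataFrame of daily returns.
rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(size=(250, 5)),
                       columns=["AAPL", "MSFT", "JPM", "XOM", "PFE"])

def mst_edges(returns: pd.DataFrame, method: str = "spearman"):
    """Build an MST from a correlation matrix via the distance
    transform d_ij = sqrt(2 * (1 - rho_ij))."""
    corr = returns.corr(method=method)   # 'pearson', 'spearman', or 'kendall'
    dist = np.sqrt(2.0 * (1.0 - corr.values))
    mst = minimum_spanning_tree(dist).toarray()  # nonzero entries are tree edges
    rows, cols = np.nonzero(mst)
    names = corr.columns
    return [(names[i], names[j]) for i, j in zip(rows, cols)]

edges = mst_edges(returns, method="kendall")
# A spanning tree on n nodes always has n - 1 edges.
print(len(edges))  # 4
```

Swapping `method` between the three correlation choices is all that is needed to reproduce the kind of comparison the abstract describes.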

arXiv

Differential machine learning (ML) extends supervised learning, with models trained on examples of not only inputs and labels, but also differentials of labels with respect to inputs.

Differential ML is applicable in all situations where high-quality first-order derivatives with respect to training inputs are available. In the context of financial Derivatives risk management, pathwise differentials are efficiently computed with automatic adjoint differentiation (AAD). Differential ML, combined with AAD, provides extremely effective pricing and risk approximations. We can produce fast pricing analytics in models too complex for closed form solutions, extract the risk factors of complex transactions and trading books, and effectively compute risk management metrics like reports across a large number of scenarios, backtesting and simulation of hedge strategies, or capital regulations.

The article focuses on differential deep learning (DL), arguably the strongest application. Standard DL trains neural networks (NN) on punctual examples, whereas differential DL teaches them the shape of the target function, resulting in vastly improved performance, illustrated with a number of numerical examples, both idealized and real world. In the online appendices, we apply differential learning to other ML models, like classic regression or principal component analysis (PCA), with equally remarkable results.

This paper is meant to be read in conjunction with its companion GitHub repo https://github.com/differential-machine-learning, where we posted a TensorFlow implementation, tested on Google Colab, along with examples from the article and additional ones. We also posted appendices covering many practical implementation details not covered in the paper, mathematical proofs, application to ML models besides neural networks and extensions necessary for a reliable implementation in production.
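The core idea, fitting values and derivatives jointly, can be illustrated with a toy least-squares version (a simplified stand-in, not the authors' TensorFlow implementation; the polynomial basis, weight `lam`, and target function are our own choices for illustration):

```python
import numpy as np

# Toy differential regression: fit f(x) = sin(x) from values AND derivatives.
# Basis phi(x) = [1, x, x^2, x^3]; its derivative basis is [0, 1, 2x, 3x^2].
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=50)
y, dy = np.sin(x), np.cos(x)          # labels and their pathwise differentials

Phi  = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)
dPhi = np.stack([np.zeros_like(x), np.ones_like(x), 2*x, 3*x**2], axis=1)

lam = 1.0  # relative weight of the derivative term in the combined loss
A = np.vstack([Phi, np.sqrt(lam) * dPhi])       # stacked design matrix
b = np.concatenate([y, np.sqrt(lam) * dy])      # stacked targets
w, *_ = np.linalg.lstsq(A, b, rcond=None)       # minimizes value + derivative errors

x_test = np.linspace(-1.0, 1.0, 11)
pred = np.stack([np.ones_like(x_test), x_test, x_test**2, x_test**3], axis=1) @ w
print(np.max(np.abs(pred - np.sin(x_test))) < 1e-2)  # True
```

The paper's differential deep learning replaces the fixed basis with a neural network and obtains the derivative "basis" by backpropagation, but the combined value-plus-derivative objective is the same shape.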

arXiv

The outbreak of the novel coronavirus (COVID-19) has caused unprecedented disruption to financial and economic markets around the globe, leading to one of the fastest U.S. stock market declines in history. However, markets have recovered in the past and will recover again; on this basis, the market should reach a minimum before rebounding at some point in the not-too-distant future. Here we present two forecast models of the S&P 500 based on COVID-19 projections of deaths released on 02/04/2020 by the University of Washington, and on the two months elapsed since the first confirmed case occurred in the USA. The decline and recovery of the index are estimated for the following three months. The forecast is a projection of a prediction with uncertainties described by q-Gaussian distributions. Our forecast was made on the premise that: (a) the prediction is based on the deterministic trend of a data set collected since the viral spread of COVID-19 started, and (b) the uncertainties are fitted from patterns of the S&P 500 over the last 24 years.
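For reference, the q-Gaussian family mentioned above is the Tsallis generalization of the normal density (notation ours):

```latex
\[
f_q(x) \;\propto\; \bigl[\, 1 - (1-q)\,\beta x^2 \,\bigr]_{+}^{\frac{1}{1-q}},
\qquad \beta > 0,
\]
\[
\text{which recovers the ordinary Gaussian } e^{-\beta x^2} \text{ in the limit } q \to 1,
\text{ and has heavy tails for } q > 1.
\]
```

The heavy tails for $q > 1$ are what make the family a natural fit for the fat-tailed return distributions the forecast's uncertainty bands are meant to capture.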

arXiv

It is still common wisdom amongst economists, politicians and lay people that economic growth is a necessity of our social systems, at least to avoid distributional conflicts. This paper challenges this belief from a purely physical theoretical perspective. It formally considers the constraints imposed by a finite environment on the prospect of continuous growth, including the dynamics of costs. Since costs grow faster than production, an eventual, unavoidable global collapse can be deduced. Then, analyzing and discussing the evolution of the unequal share of wealth under the premises of growth and competition, it is shown that the increase of inequalities is a necessary consequence of these premises.

SSRN

When households consume both nondurable goods and housing services, external habit preference over nondurable consumption generates procyclical demand for housing. Marginal utility falls when housing demand rises and innovations to housing demand arise as a risk factor. Motivated by theory, we use shocks to the ratio of residential-to-aggregate investment to capture the housing demand risk. The single-factor model exhibits strong explanatory power for expected returns across various equity characteristic-sorted portfolios and non-equity asset classes with positive risk price estimates that are similar in magnitude. The model is robust to controlling for other factor models based on durable consumption, financial intermediaries, household heterogeneity, and return-based multifactor models designed to price these assets.

arXiv

We study investor beliefs, sentiment, and disagreement about stock market returns during the COVID-19 pandemic using a large number of investor messages on a social media investing platform, \textit{StockTwits}. The rich and multimodal features of the StockTwits data allow us to explore the evolution of sentiment and disagreement within and across investors, sectors, and even industries. We find that sentiment (disagreement) decreases (increases) sharply across all investors, regardless of investment philosophy, horizon, and experience, between February 19, 2020, and March 23, 2020, a period in which a historical market high was followed by a record drop. Surprisingly, these measures reverse sharply toward the end of March. However, the behavior of these measures across various sectors is heterogeneous: the financial and healthcare sectors are the most pessimistic and the most optimistic divisions, respectively.

arXiv

We propose a novel estimation procedure for scale-by-scale lead-lag relationships of financial assets observed at high frequency in a non-synchronous manner. The proposed procedure does not require any interpolation of the original datasets and is applicable to data with the highest available time resolution. Consistency of the proposed estimators is shown under the continuous-time framework developed in our previous work, Hayashi and Koike (2018). An empirical application to a quote dataset of the NASDAQ-100 assets identifies two types of lead-lag relationships at different time scales.
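A much-simplified illustration of lead-lag detection on synchronously sampled series (this is an ordinary cross-correlation scan, not the authors' scale-by-scale, non-synchronous estimator; the data and lag are synthetic):

```python
import numpy as np

def estimate_lead_lag(x, y, max_lag=20):
    """Return the lag (in samples) at which y correlates most strongly
    with x; a positive result means x leads y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    def corr_at(k):
        # Pair x[t] with y[t + k], trimming the overhang at the ends.
        if k >= 0:
            a, b = x[:n - k], y[k:]
        else:
            a, b = x[-k:], y[:n + k]
        return float(np.mean(a * b))
    return max(range(-max_lag, max_lag + 1), key=lambda k: abs(corr_at(k)))

# Synthetic example: y is x delayed by 3 samples, plus noise.
rng = np.random.default_rng(2)
x = rng.normal(size=1000)
y = np.roll(x, 3) + 0.1 * rng.normal(size=1000)
print(estimate_lead_lag(x, y))  # 3
```

The paper's contribution is precisely to avoid the synchronous-grid assumption this sketch relies on, and to resolve the lead-lag structure separately at each time scale.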

arXiv

We study an optimization problem for a portfolio with a risk-free asset, a liquid risky asset, and an illiquid risky asset. The illiquid risky asset is sold at an exogenous random moment with a prescribed liquidation time distribution. The investor's preferences are described by a negative or a positive exponential utility function. We prove that the two cases are connected by a one-to-one analytical substitution and are identical from the economic, analytical, and Lie algebraic points of view.

It is well known that the exponential utility function is connected with the HARA utility function through a limiting procedure, as the parameter of the HARA utility function tends to infinity. We show that the optimization problem with the exponential utility function is not connected to the HARA case by this limiting procedure, and we obtain essentially different results.

For the main three-dimensional PDE with the exponential utility function, we obtain the complete set of nonequivalent Lie group invariant reductions to two-dimensional PDEs according to an optimal system of subalgebras of the admitted Lie algebra. We prove that in just one case the invariant reduction is consistent with the boundary condition. This reduction represents a significant simplification of the original problem.

arXiv

We study how to perform tests on samples of pairs of observations and predictions in order to assess whether or not the predictions are prudent. Prudence requires that the mean of the differences of the observation-prediction pairs can be shown to be significantly negative. For safe conclusions, we suggest testing both unweighted (or equally weighted) and weighted means and explicitly taking into account the randomness of individual pairs. The test methods presented are specified mainly as bootstrap and normal approximation algorithms. The tests are general but can be applied in particular in the area of credit risk, for both regulatory and accounting purposes.
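A minimal sketch of the unweighted bootstrap variant described above, with synthetic data standing in for real observation-prediction pairs (the function name and data-generating process are our own, not the paper's):

```python
import numpy as np

def bootstrap_prudence_pvalue(observed, predicted, n_boot=10_000, seed=0):
    """One-sided bootstrap test that the mean of (observation - prediction)
    is negative, i.e. that the predictions are prudent. Returns the fraction
    of bootstrap resample means that are >= 0; small values support prudence."""
    diffs = np.asarray(observed) - np.asarray(predicted)
    rng = np.random.default_rng(seed)
    # Resample the pairs with replacement and recompute the mean difference.
    samples = rng.choice(diffs, size=(n_boot, len(diffs)), replace=True)
    boot_means = samples.mean(axis=1)
    return float(np.mean(boot_means >= 0.0))

# Synthetic example: predictions systematically exceed observations,
# so the observation - prediction differences are negative on average.
rng = np.random.default_rng(3)
observed = rng.normal(loc=1.0, scale=1.0, size=200)
predicted = observed + 0.5 + 0.2 * rng.normal(size=200)
p = bootstrap_prudence_pvalue(observed, predicted)
print(p < 0.05)  # True
```

The weighted variant the abstract mentions would replace the plain means with weighted means inside the same resampling loop.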

arXiv

Governments issue "stay at home" orders to reduce the spread of contagious diseases, but the magnitude of such orders' effectiveness is uncertain. In the United States these orders were not coordinated at the national level during the coronavirus disease 2019 (COVID-19) pandemic, which creates an opportunity to use spatial and temporal variation to measure the policies' effect with greater accuracy. Here, we combine data on the timing of stay-at-home orders with daily confirmed COVID-19 cases and fatalities at the county level in the United States. We estimate the effect of stay-at-home orders using a difference-in-differences design that accounts for unmeasured local variation in factors like health systems and demographics and for unmeasured temporal variation in factors like national mitigation actions and access to tests. Compared to counties that did not implement stay-at-home orders, the results show that the orders are associated with a 30.2 percent (11.0 to 45.2) reduction in weekly cases after one week, a 40.0 percent (23.4 to 53.0) reduction after two weeks, and a 48.6 percent (31.1 to 61.7) reduction after three weeks. Stay-at-home orders are also associated with a 59.8 percent (18.3 to 80.2) reduction in weekly fatalities after three weeks. These results suggest that stay-at-home orders reduced confirmed cases by 390,000 (170,000 to 680,000) and fatalities by 41,000 (27,000 to 59,000) within the first three weeks in localities where they were implemented.
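The difference-in-differences design mentioned above can be illustrated in its simplest two-group, two-period form (synthetic data and effect size are our own; the study's actual specification, with county and time fixed effects, is considerably richer):

```python
import numpy as np

# Two-group, two-period difference-in-differences via OLS:
#   y = b0 + b1*treated + b2*post + b3*(treated*post) + noise,
# where b3 (the interaction coefficient) is the effect of interest.
rng = np.random.default_rng(4)
n = 4000
treated = rng.integers(0, 2, n)   # 1 if the unit ever receives the order
post = rng.integers(0, 2, n)      # 1 if the observation is after the order
effect = -0.4                     # assumed true effect (e.g. on log weekly cases)
y = (1.0 + 0.3 * treated + 0.5 * post
     + effect * treated * post + 0.1 * rng.normal(size=n))

X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[3], 1))  # -0.4
```

Group and period main effects absorb the level differences, so only the interaction term identifies the policy's effect, which is the logic that lets the study control for unmeasured local and temporal variation.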