How Reliably do Empirical Tests Identify Tax Avoidance?
Lisa De Simone, Jordan Nickerson, Jeri Seidman, and Bridget Stomberg; Contemporary Accounting Research, 37(3) 1536-1561
Research on the determinants of tax avoidance has relied on tests using GAAP and cash effective tax rates (ETRs) and total and permanent book-tax differences. Two new proxies have emerged that overcome documented limitations of these measures: one, developed by Henry and Sansing (2018), allows for more meaningful interpretation of results estimated in samples that include loss observations; the other, reserves for unrecognized tax benefits (UTB), provides new data on tax uncertainty. We offer empirical evidence on how well tests using these new proxies perform relative to those used extensively in prior research. We find that tests using the proxy developed by Henry and Sansing (2018) have lower power than those using other proxies across all samples, including a sample that includes loss observations. In contrast, when firms accrue reserves for uncertain tax avoidance, tests using the current-year addition to the UTB have the highest power across all proxies, samples, and levels of reserves. In the absence of reserves, tests using the GAAP ETR best detect uncertain tax avoidance, on average. This study contributes to the literature by using a controlled environment to provide the first large-scale empirical evidence on how the power of tests varies with the use of relatively new proxies, the inclusion of loss observations, and the advent of FIN 48.
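The GAAP and cash ETR proxies discussed above are simple ratios. A minimal illustrative sketch with hypothetical numbers (not the authors' data or code) shows how they are computed and why loss observations are problematic:

```python
# Illustrative sketch with hypothetical firm-year numbers; not the authors' code.

def gaap_etr(tax_expense, pretax_income):
    """GAAP ETR: total tax expense scaled by pretax book income."""
    return tax_expense / pretax_income

def cash_etr(cash_taxes_paid, pretax_income):
    """Cash ETR: cash taxes paid scaled by pretax book income."""
    return cash_taxes_paid / pretax_income

# Hypothetical firm-year: $80M pretax income, $20M tax expense,
# $16M cash taxes paid.
print(gaap_etr(20.0, 80.0))  # 0.25
print(cash_etr(16.0, 80.0))  # 0.2

# For a loss firm (pretax_income <= 0) the ratios flip sign and lose
# their usual interpretation -- one documented limitation that the
# Henry and Sansing (2018) proxy is designed to address.
```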
Imperfect Quality Certification in Lemons Markets
Birendra K. Mishra, Ashutosh Prasad, and Vijay Mahajan; Theoretical Economics Letters, 10(6) 1260-1275
In markets with information asymmetry, the seller of a high-quality product cannot credibly communicate its quality to buyers and is forced to price like an average-quality seller. This disincentivizes providing quality, and high-quality sellers may exit the market. Among several methods of reducing information asymmetry, we provide an analytical study of certification, or grading, of quality levels by infomediaries. In the equilibrium of a quality-reporting game, we find that certification reduces, but does not eliminate, the problems of information asymmetry. There exists a threshold, determined by the accuracy of the certification process, below which customers should believe quality reports and above which they should not. We further examine a two-category scheme of high/low quality certification and discuss the design of certification grades using an entropy approach.
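To illustrate why certification accuracy matters, here is a minimal Bayesian-updating sketch (my own illustration, not the paper's reporting game): a buyer's posterior belief that a product is high quality after an imperfect certifier reports "high."

```python
# Illustrative sketch (assumptions, not the paper's model): Bayesian
# updating of a buyer's quality belief under an imperfect certifier.

def posterior_high(prior, accuracy):
    """P(high quality | certifier reports 'high'), assuming the certifier
    reports the true grade with probability `accuracy`."""
    num = accuracy * prior
    den = accuracy * prior + (1.0 - accuracy) * (1.0 - prior)
    return num / den

# A noisier certifier moves the posterior less from the prior.
print(posterior_high(0.5, 0.9))  # ~0.9
print(posterior_high(0.5, 0.6))  # ~0.6
```

At accuracy 0.5 the report is uninformative and the posterior equals the prior, which is the intuition behind the accuracy-dependent belief threshold in the abstract.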
Last-Mile Shared Delivery: A Discrete Sequential Packing Approach
Junyu Cao, Mariana Olvera-Cravioto, and Zuo-Jun Shen; Mathematics of Operations Research, 45(4) 1466-1497
We propose a model for optimizing the last-mile delivery of n packages from a distribution center to their final recipients, using a strategy that combines ride-sharing platforms (e.g., Uber or Lyft) with traditional in-house van delivery systems. The main objective is to compute the optimal reward offered to private drivers for each of the n packages such that the total expected cost of delivering all packages is minimized. Our technical approach is based on the formulation of a discrete sequential packing problem, in which bundles of packages are picked up from the warehouse at random times during the interval [0, T]. Our theoretical results include both exact and asymptotic (as n → ∞) expressions for the expected number of packages picked up by time T; these are closely related to the classical Rényi parking/packing problem. Our proposed framework is scalable in the number of packages.
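The abstract relates the model to Rényi's classical parking/packing problem. As a point of reference, here is a short Monte Carlo sketch of that classical continuous problem (my own illustration, not the paper's discrete formulation): unit-length cars park at uniform random positions until no unit-length gap remains, and the filled fraction approaches Rényi's constant, roughly 0.7476.

```python
import random

# Illustrative Monte Carlo sketch of the classical Renyi parking problem;
# not the paper's discrete sequential packing model.

def renyi_fill(length, rng):
    """Recursively fill [0, length] with unit-length cars parked at
    uniform random positions; return the number of cars parked."""
    if length < 1.0:
        return 0
    x = rng.uniform(0.0, length - 1.0)  # left endpoint of the new car
    # The car splits the street into two independent sub-intervals.
    return 1 + renyi_fill(x, rng) + renyi_fill(length - x - 1.0, rng)

rng = random.Random(0)
street_length, trials = 50.0, 2000
total = sum(renyi_fill(street_length, rng) for _ in range(trials))
density = total / (street_length * trials)
print(round(density, 3))  # close to Renyi's constant ~0.7476
```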
Matching Mobile Applications for Cross-Promotion
Gene Moo Lee, Shu He, Joowon Lee, and Andrew B. Whinston; Information Systems Research, 31(3) 865-891
The mobile app market is one of the most successful software markets. As the platform grows rapidly, with millions of apps and billions of users, search costs are increasing tremendously. The challenge is how app developers can target the right users with their apps and how consumers can find the apps that fit their needs. Cross-promotion, advertising a mobile app (the target app) in another app (the source app), is introduced as a new app-promotion framework to alleviate the issue of search costs. In this paper, we model source-app user behaviors (downloads and post-download usage) with respect to different target apps in cross-promotion campaigns. We construct a novel app-similarity measure using LDA topic modeling on apps' product descriptions, and then analyze how the similarity between the source and target apps influences users' app download and usage decisions. To estimate the model, we use a unique data set from a large-scale random matching experiment conducted by a major mobile advertising company in Korea. The empirical results show that consumers prefer more diversified apps when making download decisions than when making usage decisions, which is supported by the psychology literature on variety-seeking behavior. Lastly, we propose an app-matching system based on machine learning models (for app download and usage prediction) and generalized deferred acceptance algorithms. The simulation results show that app-analytics capability is essential in building accurate prediction models and in increasing the ad effectiveness of cross-promotion campaigns, and that, at the expense of privacy, individual user data can further improve matching performance. The paper has implications for the trade-off between utility and privacy in the growing mobile economy.
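The deferred acceptance step mentioned above can be sketched with the textbook Gale-Shapley algorithm. The toy preference data below is hypothetical and stands in for the paper's model-predicted scores; this is a generic sketch, not the authors' generalized variant.

```python
# Illustrative Gale-Shapley deferred acceptance sketch with hypothetical
# data; not the paper's generalized matching system. Target apps "propose"
# to source apps ranked best-first; each source app holds its best offer.

def deferred_acceptance(proposer_prefs, receiver_rank):
    """proposer_prefs: {proposer: [receivers, best first]} (complete lists);
    receiver_rank: {receiver: {proposer: rank}}, lower rank preferred.
    Returns a stable matching as {receiver: proposer}."""
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}  # receiver -> currently held proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best unproposed receiver
        next_choice[p] += 1
        current = engaged.get(r)
        if current is None:
            engaged[r] = p
        elif receiver_rank[r][p] < receiver_rank[r][current]:
            engaged[r] = p          # r trades up; the old proposer is freed
            free.append(current)
        else:
            free.append(p)          # r rejects p; p proposes again later
    return engaged

prefs = {"A": ["X", "Y"], "B": ["X", "Y"]}
ranks = {"X": {"A": 0, "B": 1}, "Y": {"A": 1, "B": 0}}
print(deferred_acceptance(prefs, ranks))  # {'X': 'A', 'Y': 'B'}
```

In the paper's setting, the preference lists would come from the download/usage prediction models rather than being given exogenously.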
Modeling Stochastic Mortality for Joint Lives through Subordinators
Yuxin Zhang and Patrick L. Brockett; Insurance: Mathematics & Economics, 95, 166-172
There is a burgeoning literature on mortality models for joint lives. In this paper, we propose a new model that uses time-changed Brownian motion with dependent subordinators to describe the mortality of joint lives. We then employ this model to estimate the mortality rates of joint lives in a well-known Canadian insurance data set. Specifically, we first describe an individual's death time as the stopping time at which the value of the hazard-rate process first reaches or exceeds an exponential random variable, and then introduce dependence through dependent subordinators. Compared with existing mortality models, this model better captures the correlation of death times between joint lives and allows more flexibility in the evolution of the hazard-rate process. Empirical results show that the model yields highly accurate mortality estimates compared with the baseline non-parametric (Dabrowska) estimator.
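A rough simulation sketch of the first-passage construction described above, under my own illustrative assumptions (a gamma subordinator as the Brownian clock and an exponential-of-Brownian hazard; the paper's specification, parameters, and dependence structure may differ):

```python
import math
import random

# Illustrative single-life sketch; not the paper's model or calibration.
# Death occurs when the integrated hazard first reaches or exceeds an
# independent Exp(1) threshold, with the Brownian motion run on a
# random clock given by a gamma subordinator.

def simulate_death_time(rng, dt=0.01, horizon=50.0):
    threshold = rng.expovariate(1.0)   # Exp(1) trigger level
    w = 0.0                            # time-changed Brownian motion
    cum_hazard = 0.0
    t = 0.0
    while t < horizon:
        # Gamma subordinator increment with mean dt (hypothetical choice).
        tau = rng.gammavariate(dt / 0.02, 0.02)
        w += rng.gauss(0.0, math.sqrt(tau))
        hazard = 0.02 * math.exp(0.5 * w)  # hypothetical hazard form
        cum_hazard += hazard * dt
        t += dt
        if cum_hazard >= threshold:
            return t                   # death time
    return horizon                     # censored at the horizon

rng = random.Random(1)
times = [simulate_death_time(rng) for _ in range(100)]
print(round(sum(times) / len(times), 1))
```

For joint lives, the paper introduces dependence through dependent subordinators; a two-life version of this sketch would drive both Brownian clocks with correlated gamma increments.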