Today's Academic Horizons (2018.4.6)


Author: ZQtGe6 | Published 2018-04-06 05:59

    cs.AI - Artificial Intelligence
    cs.CL - Computation and Language
    cs.CR - Cryptography and Security
    cs.CV - Computer Vision and Pattern Recognition
    cs.CY - Computers and Society
    cs.DB - Databases
    cs.DC - Distributed, Parallel, and Cluster Computing
    cs.HC - Human-Computer Interaction
    cs.IT - Information Theory
    cs.LG - Machine Learning
    cs.NE - Neural and Evolutionary Computing
    cs.SI - Social and Information Networks
    cs.SY - Systems and Control
    econ.EM - Econometrics
    eess.IV - Image and Video Processing
    eess.SP - Signal Processing
    math.CO - Combinatorics
    math.OC - Optimization and Control
    math.PR - Probability
    math.ST - Statistics Theory
    nlin.AO - Adaptation and Self-Organizing Systems
    physics.soc-ph - Physics and Society
    q-bio.QM - Quantitative Methods
    stat.AP - Applied Statistics
    stat.ME - Statistics Methodology
    stat.ML - Machine Learning (Statistics)

    • [cs.AI]Abstractive Tabular Dataset Summarization via Knowledge Base Semantic Embeddings
    • [cs.AI]Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks
    • [cs.AI]Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
    • [cs.AI]Probing Physics Knowledge Using Tools from Developmental Psychology
    • [cs.AI]R2RML Mappings in OBDA Systems: Enabling Comparison among OBDA Tools
    • [cs.AI]The Logical Essentials of Bayesian Reasoning
    • [cs.AI]The Tsetlin Machine - A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic
    • [cs.CL]Clinical Concept Embeddings Learned from Massive Sources of Medical Data
    • [cs.CL]Marian: Fast Neural Machine Translation in C++
    • [cs.CL]Socioeconomic Dependencies of Linguistic Patterns in Twitter: A Multivariate Analysis
    • [cs.CR]Cost-Benefit Analysis of Moving-Target Defense in Power Grids
    • [cs.CV]A Modified Image Comparison Algorithm Using Histogram Features
    • [cs.CV]A Multi-Stage Multi-Task Neural Network for Aerial Scene Interpretation and Geolocalization
    • [cs.CV]A Segmentation-aware Deep Fusion Network for Compressed Sensing MRI
    • [cs.CV]A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation
    • [cs.CV]Btrfly Net: Vertebrae Labelling with Energy-based Adversarial Learning of Local Spine Prior
    • [cs.CV]Building Efficient CNN Architecture for Offline Handwritten Chinese Character Recognition
    • [cs.CV]Crystal Loss and Quality Pooling for Unconstrained Face Verification and Recognition
    • [cs.CV]Density Adaptive Point Set Registration
    • [cs.CV]Discriminative Cross-View Binary Representation Learning
    • [cs.CV]Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars
    • [cs.CV]Fine-grained Video Attractiveness Prediction Using Multimodal Deep Learning on a Large Real-world Dataset
    • [cs.CV]Gaussian Process Uncertainty in Age Estimation as a Measure of Brain Abnormality
    • [cs.CV]Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input
    • [cs.CV]Learning Discriminative Features with Multiple Granularities for Person Re-Identification
    • [cs.CV]Normalized Cut Loss for Weakly-supervised CNN Segmentation
    • [cs.CV]Patch-based Face Recognition using a Hierarchical Multi-label Matcher
    • [cs.CV]Prediction and Localization of Student Engagement in the Wild
    • [cs.CV]Representing Videos based on Scene Layouts for Recognizing Agent-in-Place Actions
    • [cs.CV]Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval
    • [cs.CV]SketchMate: Deep Hashing for Million-Scale Human Sketch Retrieval
    • [cs.CV]Stochastic Adversarial Video Prediction
    • [cs.CV]Synthesizing Programs for Images using Reinforced Adversarial Learning
    • [cs.CV]Towards Deep Learning based Hand Keypoints Detection for Rapid Sequential Movements from RGB Images
    • [cs.CV]Unsupervised Geometry-Aware Representation for 3D Human Pose Estimation
    • [cs.CV]Unsupervised Semantic-based Aggregation of Deep Convolutional Features
    • [cs.CV]Visual Object Categorization Based on Hierarchical Shape Motifs Learned From Noisy Point Cloud Decompositions
    • [cs.CY]Online Abuse of UK MPs in 2015 and 2017: Perpetrators, Targets, and Topics
    • [cs.DB]NegPSpan: efficient extraction of negative sequential patterns with embedding constraints
    • [cs.DC]A Deterministic Distributed $2$-Approximation for Weighted Vertex Cover in $O(\log n \log\Delta / \log^2\log\Delta)$ Rounds
    • [cs.DC]Designing a Micro-Benchmark Suite to Evaluate gRPC for TensorFlow: Early Experiences
    • [cs.DC]Maintenance of Strongly Connected Component in Shared-memory Graph
    • [cs.DC]Optimal Rendezvous ${\mathcal L}$-Algorithms for Asynchronous Mobile Robots with External-Lights
    • [cs.DC]Personal Volunteer Computing
    • [cs.HC]Vanlearning: A Machine Learning SaaS Application for People Without Programming Backgrounds
    • [cs.IT]Controllable Identifier Measurements for Private Authentication with Secret Keys
    • [cs.IT]Nonexistence of generalized bent functions and the quadratic norm form equations
    • [cs.IT]The Capacity of Anonymous Communications
    • [cs.LG]Feature selection in weakly coherent matrices
    • [cs.LG]Hospital Readmission Prediction - Applying Hierarchical Sparsity Norms for Interpretable Models
    • [cs.LG]Information Maximizing Exploration with a Latent Dynamics Model
    • [cs.LG]Online Multi-Label Classification: A Label Compression Method
    • [cs.LG]Renewal Monte Carlo: Renewal theory based reinforcement learning
    • [cs.LG]Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law
    • [cs.LG]Universal Planning Networks
    • [cs.NE]When Hypermutations and Ageing Enable Artificial Immune Systems to Outperform Evolutionary Algorithms
    • [cs.SI]Information Propagation Analysis of Social Network Using the Universality of Random Matrix
    • [cs.SI]Real-time Detection of Content Polluters in Partially Observable Twitter Networks
    • [cs.SI]WhatsApp, Doc? A First Look at WhatsApp Public Group Data
    • [cs.SY]Real-Time Prediction of the Duration of Distribution System Outages
    • [econ.EM]Should We Condition on the Test for Pre-trends in Difference-in-Difference Designs?
    • [eess.IV]Looking at Hands in Autonomous Vehicles: A ConvNet Approach using Part Affinity Fields
    • [eess.SP]Estimation of Channel Parameters in a Multipath Environment via Optimizing Highly Oscillatory Error-Functions Using a Genetic Algorithm
    • [math.CO]Pure gaps on curves with many rational places
    • [math.OC]Sparse non-negative super-resolution - simplified and stabilised
    • [math.PR]A Fixed Point Theorem for Iterative Random Contraction Operators over Banach Spaces
    • [math.ST]Near-Optimality Recovery of Linear and N-Convex Functions on Unions of Convex Sets
    • [math.ST]On inference validity of weighted U-statistics under data heterogeneity
    • [math.ST]On the Existence of Uniformly Most Powerful Bayesian Tests With Application to Non-Central Chi-Squared Tests
    • [math.ST]The noise barrier and the large signal bias of the Lasso and other convex estimators
    • [nlin.AO]Self-Organization and Artificial Life: A Review
    • [physics.soc-ph]Spatial diffusion and churn of social media
    • [q-bio.QM]An integration of fast alignment and maximum-likelihood methods for electron subtomogram averaging and classification
    • [stat.AP]A Mixture Model to Detect Edges in Sparse Co-expression Graphs
    • [stat.AP]A latent variable model to measure exposure diversification in the Austrian interbank market
    • [stat.AP]On approximate least squares estimators of parameters on one-dimensional chirp signal
    • [stat.AP]Predictive and Prescriptive Analytics for Location Selection of Add-on Retail Products
    • [stat.ME]Beta regression control chart for monitoring fractions and proportions
    • [stat.ME]Model assessment for time series dynamics using copula spectral densities: a graphical tool
    • [stat.ME]Robust Discrimination between Long-Range Dependence and a Change in Mean
    • [stat.ME]Shape-Constrained Univariate Density Estimation
    • [stat.ME]Variable selection using pseudo-variables
    • [stat.ML]An Imprecise Probabilistic Estimator for the Transition Rate Matrix of a Continuous-Time Markov Chain
    • [stat.ML]Gaussian Process Subset Scanning for Anomalous Pattern Detection in Non-iid Data

    ·····································

    • [cs.AI]Abstractive Tabular Dataset Summarization via Knowledge Base Semantic Embeddings
    Paul Azunre, Craig Corcoran, David Sullivan, Garrett Honke, Rebecca Ruppel, Sandeep Verma, Jonathon Morgan
    http://arxiv.org/abs/1804.01503v1

    This paper describes an abstractive summarization method for tabular data which employs a knowledge base semantic embedding to generate the summary. Assuming the dataset contains descriptive text in headers, columns and/or some augmenting metadata, the system employs the embedding to recommend a subject/type for each text segment. Recommendations are aggregated into a small collection of super types considered to be descriptive of the dataset by exploiting the hierarchy of types in a pre-specified ontology. Using February 2015 Wikipedia as the knowledge base, and a corresponding DBpedia ontology as types, we present experimental results on open data taken from several sources--OpenML, CKAN and data.world--to illustrate the effectiveness of the approach.
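    As a rough illustration of the aggregation step, the sketch below counts how often each super-type covers the per-column type recommendations over a toy hierarchy. The three-level ontology and type names are hypothetical stand-ins for the DBpedia hierarchy the paper actually uses.

```python
from collections import Counter

# Toy ontology: child type -> parent type (hypothetical stand-in for DBpedia).
ONTOLOGY = {
    "Scientist": "Person", "Politician": "Person",
    "City": "Place", "Country": "Place",
    "Person": "Agent",
}

def ancestors(t):
    """Yield t and all of its ancestors in the type hierarchy."""
    while t is not None:
        yield t
        t = ONTOLOGY.get(t)

def summarize(recommended_types, top_k=2):
    """Aggregate per-column type recommendations into a few super-types."""
    counts = Counter()
    for t in recommended_types:
        counts.update(ancestors(t))
    return [t for t, _ in counts.most_common(top_k)]

# One recommended type per text segment of the dataset.
print(summarize(["Scientist", "Politician", "City"]))  # e.g. ['Person', 'Agent']
```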

    • [cs.AI]Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks
    Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Vadim Sheinin
    http://arxiv.org/abs/1804.00823v2

    Celebrated \emph{Sequence to Sequence learning (Seq2Seq)} and its fruitful variants are powerful models that achieve excellent performance on tasks mapping sequences to sequences. However, there are many machine learning tasks with inputs naturally represented in the form of graphs, which poses significant challenges to existing Seq2Seq models for lossless conversion from graph form to sequence. In this work, we present a general end-to-end approach that maps an input graph to a sequence of vectors and uses an attention-based LSTM to decode the target sequence from these vectors. Specifically, to address the inevitable information loss of this conversion, we introduce a novel graph-to-sequence neural network model that follows the encoder-decoder architecture. Our method first uses an improved graph-based neural network to generate node and graph embeddings, with a novel aggregation strategy that incorporates edge-direction information into the node embeddings. We also propose an attention-based mechanism that aligns node embeddings with the decoding sequence to better cope with large graphs. Experimental results on the bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms other baselines. We also show that with the proposed aggregation strategy, our model converges quickly to good performance.
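    A minimal sketch of the direction-aware aggregation idea, in plain numpy with made-up weights rather than the paper's trained model: each node keeps separate sums over its outgoing and incoming neighbors before a nonlinearity, so edge direction is not lost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy directed graph: 4 nodes, edge list of (src, dst) pairs.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
num_nodes, dim = 4, 8
h = rng.normal(size=(num_nodes, dim))        # initial node features
W_fwd = rng.normal(size=(dim, dim)) * 0.1    # weights for outgoing neighbors
W_bwd = rng.normal(size=(dim, dim)) * 0.1    # weights for incoming neighbors

def aggregate(h):
    """One propagation step keeping separate forward/backward neighbor sums."""
    fwd = np.zeros_like(h)
    bwd = np.zeros_like(h)
    for s, d in edges:
        fwd[s] += h[d]    # information flowing along the edge direction
        bwd[d] += h[s]    # information flowing against it
    return np.tanh(h + fwd @ W_fwd + bwd @ W_bwd)

h = aggregate(aggregate(h))          # two hops
graph_embedding = h.mean(axis=0)     # simple readout over node embeddings
print(graph_embedding.shape)         # (8,)
```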

    • [cs.AI]Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
    Ashwin J. Vijayakumar, Abhishek Mohta, Oleksandr Polozov, Dhruv Batra, Prateek Jain, Sumit Gulwani
    http://arxiv.org/abs/1804.01186v1

    Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications, such as spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either rely completely on deductive logic techniques that are extensively hand-engineered, or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to train on sparingly available real-world data while still leveraging powerful recurrent neural network encoders. We demonstrate the effectiveness of our method on real-world customer scenarios, synthesizing accurate programs with up to 12x speed-up compared to state-of-the-art systems.

    • [cs.AI]Probing Physics Knowledge Using Tools from Developmental Psychology
    Luis Piloto, Ari Weinstein, Dhruva TB, Arun Ahuja, Mehdi Mirza, Greg Wayne, David Amos, Chia-chun Hung, Matt Botvinick
    http://arxiv.org/abs/1804.01128v1

    In order to build agents with a rich understanding of their environment, one key objective is to endow them with a grasp of intuitive physics: an ability to reason about three-dimensional objects, their dynamic interactions, and responses to forces. While some work on this problem has taken the approach of building in components such as ready-made physics engines, other research aims to extract general physical concepts directly from sensory data. In the latter case, one challenge that arises is evaluating the learning system. Research on intuitive physics knowledge in children has long employed a violation of expectations (VOE) method to assess children's mastery of specific physical concepts. We take the novel step of applying this method to artificial learning systems. In addition to introducing the VOE technique, we describe a set of probe datasets inspired by classic test stimuli from developmental psychology. We test a baseline deep learning system on this battery, as well as on a physics learning dataset ("IntPhys") recently posed by another research group. Our results show how the VOE technique may provide a useful tool for tracking physics knowledge in future research.

    • [cs.AI]R2RML Mappings in OBDA Systems: Enabling Comparison among OBDA Tools
    Manuel Namici
    http://arxiv.org/abs/1804.01405v1

    In today's large enterprises there is a significant and increasing trend in the amount of data that has to be stored and processed. To complicate this scenario, the difficulty of organizing and managing a large collection of data structured according to a single, unified schema means that there is almost never a single place to look to satisfy an information need. The Ontology-Based Data Access (OBDA) paradigm aims at mitigating this phenomenon by providing the users of the system with a unified and shared conceptual view of the domain of interest (the ontology), while still enabling the data to be stored in different data sources, each managed by a relational database. In an OBDA system the link between the data stored at the sources and the ontology is provided through a declarative specification given in terms of a set of mappings. In this work we focus on comparing two of the available systems for OBDA, namely Mastro and Ontop, by adopting OBDA specifications based on W3C recommendations. We first show how support for R2RML mappings has been integrated into Mastro, the last feature missing to enable the system to use specifications based solely on the W3C recommendations relevant to OBDA. We then compare these systems over two OBDA specifications, the NPD Benchmark and the ACI specification.

    • [cs.AI]The Logical Essentials of Bayesian Reasoning
    Bart Jacobs, Fabio Zanasi
    http://arxiv.org/abs/1804.01193v1

    This chapter offers an accessible introduction to the channel-based approach to Bayesian probability theory. This framework rests on algebraic and logical foundations, inspired by the methodologies of programming language semantics. It offers a uniform, structured and expressive language for describing Bayesian phenomena in terms of familiar programming concepts, like channel, predicate transformation and state transformation. The introduction also covers inference in Bayesian networks, which will be modelled by a suitable calculus of string diagrams.
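    The channel view has a direct computational reading: a channel is a row-stochastic matrix, state transformation pushes a distribution forward through it, and predicate transformation pulls a fuzzy predicate back the other way. A minimal numpy illustration (the numbers are arbitrary):

```python
import numpy as np

# A channel from a 2-element space to a 3-element space: each row is a
# conditional distribution p(y | x), so rows sum to one.
channel = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.3, 0.6]])

state = np.array([0.4, 0.6])           # prior distribution on the 2-element space

# State transformation: push the prior forward through the channel.
pushforward = state @ channel          # p(y) = sum_x p(x) p(y|x)

# Predicate transformation goes the other way: pull a predicate on Y back to X.
predicate = np.array([1.0, 0.0, 1.0])  # fuzzy predicate on the 3-element space
pullback = channel @ predicate         # expected truth value of the predicate given x

print(pushforward)  # [0.34 0.26 0.40]
print(pullback)     # [0.8 0.7]
```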

    • [cs.AI]The Tsetlin Machine - A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic
    Ole-Christoffer Granmo
    http://arxiv.org/abs/1804.01508v1

    Although simple individually, artificial neurons provide state-of-the-art performance when interconnected in deep networks. Unknown to many, there exists an arguably even simpler and more versatile learning mechanism, namely, the Tsetlin Automaton. Merely by means of a single integer as memory, it learns the optimal action in stochastic environments. In this paper, we introduce the Tsetlin Machine, which solves complex pattern recognition problems with easy-to-interpret propositional formulas, composed by a collective of Tsetlin Automata. To eliminate the longstanding problem of vanishing signal-to-noise ratio, the Tsetlin Machine orchestrates the automata using a novel game. Our theoretical analysis establishes that the Nash equilibria of the game are aligned with the propositional formulas that provide optimal pattern recognition accuracy. This translates to learning without local optima, only global ones. We argue that the Tsetlin Machine finds the propositional formula that provides optimal accuracy, with probability arbitrarily close to unity. In four distinct benchmarks, the Tsetlin Machine outperforms Neural Networks, SVMs, Random Forests, the Naive Bayes Classifier and Logistic Regression. It further turns out that the accuracy advantage of the Tsetlin Machine grows as data becomes scarcer. The Tsetlin Machine also has a significant computational performance advantage, since inputs, patterns, and outputs are all expressed as bits, while pattern recognition relies on bit manipulation. The combination of accuracy, interpretability, and computational simplicity makes the Tsetlin Machine a promising tool for a wide range of domains, including safety-critical medicine. Being the first of its kind, we believe the Tsetlin Machine will kick-start completely new paths of research, with a potentially significant impact on the AI field and the applications of AI.
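    For intuition about the building block, here is a minimal sketch of a classic two-action Tsetlin Automaton whose entire memory is one integer; the environment's reward probabilities below are invented for the demonstration.

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin Automaton: a single integer encodes the state.
    States 1..n choose action 0; states n+1..2n choose action 1."""
    def __init__(self, n=100):
        self.n = n
        self.state = random.choice([n, n + 1])  # start at the decision boundary

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Rewards push the state deeper into the current action's half.
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalty(self):
        # Penalties push the state toward the opposite action.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1

# Stochastic environment: action 1 is rewarded 90% of the time, action 0 only 40%.
ta = TsetlinAutomaton()
for _ in range(10000):
    p = 0.9 if ta.action() == 1 else 0.4
    ta.reward() if random.random() < p else ta.penalty()
print(ta.action())  # converges to 1 with high probability
```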

    • [cs.CL]Clinical Concept Embeddings Learned from Massive Sources of Medical Data
    Andrew L. Beam, Benjamin Kompa, Inbar Fried, Nathan P. Palmer, Xu Shi, Tianxi Cai, Isaac S. Kohane
    http://arxiv.org/abs/1804.01486v1

    Word embeddings have emerged as a popular approach to unsupervised learning of word relationships in machine learning and natural language processing. In this article, we benchmark two of the most popular algorithms, GloVe and word2vec, to assess their suitability for capturing medical relationships in large sources of biomedical data. Leaning on recent theoretical insights, we provide a unified view of these algorithms and demonstrate how different sources of data can be combined to construct the largest ever set of embeddings for 108,477 medical concepts using an insurance claims database of 60 million members, 20 million clinical notes, and 1.7 million full text biomedical journal articles. We evaluate our approach, called cui2vec, on a set of clinically relevant benchmarks and in many instances demonstrate state-of-the-art performance relative to previous results. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings.
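    Once such embeddings are downloaded, a typical use is nearest-neighbor exploration by cosine similarity. A minimal sketch with randomly generated stand-ins for the real cui2vec vectors; the dict layout and the choice of UMLS concept identifiers (CUIs) here are illustrative, not the released file format.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical stand-in for the pre-trained cui2vec vectors: a dict mapping
# UMLS concept identifiers (CUIs) to dense vectors.
rng = np.random.default_rng(0)
embeddings = {cui: rng.normal(size=500)
              for cui in ["C0011849", "C0020538", "C0027051"]}

def most_similar(query, k=2):
    """Rank all other concepts by cosine similarity to the query concept."""
    q = embeddings[query]
    scores = {c: cosine(q, v) for c, v in embeddings.items() if c != query}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(most_similar("C0011849"))
```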

    • [cs.CL]Marian: Fast Neural Machine Translation in C++
    Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, Alexandra Birch
    http://arxiv.org/abs/1804.00344v3

    We present Marian, an efficient and self-contained Neural Machine Translation framework with an integrated automatic differentiation engine based on dynamic computation graphs. Marian is written entirely in C++. We describe the design of the encoder-decoder framework and demonstrate that a research-friendly toolkit can achieve high training and translation speed.

    • [cs.CL]Socioeconomic Dependencies of Linguistic Patterns in Twitter: A Multivariate Analysis
    Jacob Levy Abitbol, Márton Karsai, Jean-Philippe Magué, Jean-Pierre Chevrot, Eric Fleury
    http://arxiv.org/abs/1804.01155v1

    Our usage of language is not solely reliant on cognition but is arguably determined by myriad external factors leading to a global variability of linguistic patterns. This issue, which lies at the core of sociolinguistics and is backed by many small-scale studies on face-to-face communication, is addressed here by constructing a dataset combining the largest French Twitter corpus to date with detailed socioeconomic maps obtained from the national census in France. We show how key linguistic variables measured in individual Twitter streams depend on factors like socioeconomic status, location, time, and the social network of individuals. We found that (i) people of higher socioeconomic status, active to a greater degree during the daytime, use a more standard language; (ii) the southern part of the country is more prone to using standard language than the northern one, while locally the variety or dialect used is determined by the spatial distribution of socioeconomic status; and (iii) individuals connected in the social network are closer linguistically than disconnected ones, even after the effects of status homophily have been removed. Our results inform sociolinguistic theory and may inspire novel learning methods for inferring the socioeconomic status of people from the way they tweet.

    • [cs.CR]Cost-Benefit Analysis of Moving-Target Defense in Power Grids
    Subhash Lakshminarayana, David K. Y. Yau
    http://arxiv.org/abs/1804.01472v1

    We study moving-target defense (MTD) that actively perturbs transmission line reactances to thwart stealthy false data injection (FDI) attacks against state estimation in a power grid. Prior work on this topic has proposed MTD based on randomly selected reactance perturbations, but these perturbations cannot guarantee effective attack detection. To address the issue, we present formal design criteria to select MTD reactance perturbations that are truly effective. However, based on a key optimal power flow (OPF) formulation, we find that the effective MTD may incur a non-trivial operational cost that has not hitherto received attention. Accordingly, we characterize important tradeoffs between the MTD's detection capability and its associated required cost. Extensive simulations, using the MATPOWER simulator and benchmark IEEE bus systems, verify and illustrate the proposed design approach that for the first time addresses both key aspects of cost and effectiveness of the MTD.

    • [cs.CV]A Modified Image Comparison Algorithm Using Histogram Features
    Anas M. Al-Oraiqat, Natalya S. Kostyukova
    http://arxiv.org/abs/1804.01142v1

    This article discusses the problem of color image content comparison. In particular, methods of image content comparison are analyzed, the restrictions of color histograms are described, and a modified method of image content comparison is proposed. This method uses color histograms and also considers color locations. The base and modified algorithms are tested and analyzed. The modified method shows 97% average precision on a collection of about 700 images without losing the advantages of the base method, i.e., it remains scale- and rotation-invariant.
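    The idea of combining color histograms with coarse color location can be sketched as follows. This is an illustrative variant (per-cell histograms over a fixed grid, compared with histogram intersection), not the authors' exact algorithm.

```python
import numpy as np

def grid_histograms(img, bins=8, grid=(2, 2)):
    """Per-cell color histograms: the grid keeps coarse color location."""
    h, w, _ = img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = img[i * h // gh:(i + 1) * h // gh,
                       j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogramdd(cell.reshape(-1, 3),
                                     bins=(bins, bins, bins),
                                     range=((0, 256),) * 3)
            feats.append(hist.ravel() / hist.sum())
    return np.concatenate(feats)

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return float(np.minimum(h1, h2).sum() / h1.sum())

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
b = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
print(intersection(grid_histograms(a), grid_histograms(a)))  # 1.0 (identical)
print(intersection(grid_histograms(a), grid_histograms(b)))  # < 1.0
```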

    • [cs.CV]A Multi-Stage Multi-Task Neural Network for Aerial Scene Interpretation and Geolocalization
    Alina Marcu, Dragos Costea, Emil Slusanschi, Marius Leordeanu
    http://arxiv.org/abs/1804.01322v1

    Semantic segmentation and vision-based geolocalization in aerial images are challenging tasks in computer vision. Due to the advent of deep convolutional nets and the availability of relatively low-cost UAVs, they are currently attracting growing attention in the field. We propose a novel multi-task multi-stage neural network that is able to handle the two problems at the same time, in a single forward pass. The first stage of our network predicts pixelwise class labels, while the second stage provides a precise location using two branches. One branch uses a regression network, while the other predicts a location map trained as a segmentation task. From a structural point of view, our architecture uses encoder-decoder modules at each stage, re-using the same encoder structure. Furthermore, its size is limited to be tractable on an embedded GPU. We achieve commercial GPS-level localization accuracy from satellite images with a spatial resolution of 1 square meter per pixel in a city-wide area of interest. On the task of semantic segmentation, we obtain state-of-the-art results on two challenging datasets, the Inria Aerial Image Labeling dataset and Massachusetts Buildings.

    • [cs.CV]A Segmentation-aware Deep Fusion Network for Compressed Sensing MRI
    Zhiwen Fan, Liyan Sun, Xinghao Ding, Yue Huang, Congbo Cai, John Paisley
    http://arxiv.org/abs/1804.01210v1

    Compressed sensing MRI is a classic inverse problem in the field of computational imaging, accelerating MR imaging by measuring less k-space data. Deep neural network models provide stronger representation ability and faster reconstruction than "shallow" optimization-based methods. However, existing deep CS-MRI models overlook the high-level semantic supervision available from the massive segmentation labels in MRI datasets. In this paper, we propose a segmentation-aware deep fusion network called SADFN for compressed sensing MRI. A multilayer feature aggregation (MLFA) method is introduced to fuse all the features from different layers in the segmentation network. The aggregated feature maps containing semantic information are then provided to each layer in the reconstruction network with a feature fusion strategy. This ensures the reconstruction network is aware of the different regions in the image it reconstructs, simplifying the function mapping. We demonstrate the utility of the cross-layer and cross-task information fusion strategy in a comparative study. Extensive experiments on the brain segmentation benchmark MRBrainS validate that the proposed SADFN model achieves state-of-the-art accuracy in compressed sensing MRI. This paper provides a novel approach to guide a low-level visual task using information from a mid- or high-level task.

    • [cs.CV]A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation
    Guillermo Gallego, Henri Rebecq, Davide Scaramuzza
    http://arxiv.org/abs/1804.01306v1

    We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras.
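    The core loop of contrast maximization is easy to reproduce on synthetic data: warp the events to a reference time under a candidate motion, accumulate them into an image, and score the candidate by the image's variance. A minimal sketch for a single translational velocity, with invented event data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic events (x, y, t): an edge moving with velocity (vx, vy) = (20, 0) px/s.
t = rng.uniform(0, 0.1, 2000)
x = 10 + 20.0 * t + rng.normal(0, 0.3, t.size)
y = rng.uniform(0, 32, t.size)

def contrast(v, H=32, W=32):
    """Variance of the image of events warped back to t=0 by candidate velocity v."""
    vx, vy = v
    wx = np.clip(np.round(x - vx * t).astype(int), 0, W - 1)
    wy = np.clip(np.round(y - vy * t).astype(int), 0, H - 1)
    img = np.zeros((H, W))
    np.add.at(img, (wy, wx), 1.0)   # accumulate warped events
    return img.var()

# Grid search over candidate velocities: the true one maximizes contrast.
candidates = [(vx, 0.0) for vx in np.linspace(0, 40, 41)]
best = max(candidates, key=contrast)
print(best)  # close to (20.0, 0.0)
```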

    • [cs.CV]Btrfly Net: Vertebrae Labelling with Energy-based Adversarial Learning of Local Spine Prior
    Anjany Sekuboyina, Markus Rempfler, Jan Kukačka, Giles Tetteh, Alexander Valentinitsch, Jan S. Kirschke, Bjoern H. Menze
    http://arxiv.org/abs/1804.01307v1

    Robust localisation and identification of vertebrae is an essential part of automated spine analysis. The contribution of this work to the task is two-fold: (1) Inspired by the human expert, we hypothesise that sagittal and coronal reformations of the spine contain sufficient information for labelling the vertebrae. Thereby, we propose a butterfly-shaped network architecture (termed Btrfly Net) that efficiently combines the information across the reformations. (2) Underpinning the Btrfly net, we present an energy-based adversarial training regime that encodes the local spine structure as an anatomical prior into the network, thereby enabling it to achieve state-of-the-art performance in all standard metrics on a benchmark dataset of 302 scans without any post-processing during inference.

    • [cs.CV]Building Efficient CNN Architecture for Offline Handwritten Chinese Character Recognition
    Zhiyuan Li, Nanjun Teng, Min Jin, Huaxiang Lu
    http://arxiv.org/abs/1804.01259v1

    Methods based on deep convolutional networks have brought great breakthroughs in image classification, providing an end-to-end solution for the handwritten Chinese character recognition (HCCR) problem by learning discriminative features automatically. Nevertheless, state-of-the-art CNNs incur huge computation cost and require the storage of a large number of parameters, especially in the fully connected layers, which makes it difficult to deploy such networks on hardware devices with limited computational resources. To solve the storage problem, we propose a novel technique called Global Weighted Average Pooling that reduces the parameters in the fully connected layer without loss in accuracy. Besides, we implement a cascaded model in a single CNN by adding a mid-network output layer to complete recognition as early as possible, which reduces average inference time significantly. Experiments were performed on the ICDAR-2013 offline HCCR dataset, showing that the proposed approach needs only 6.9ms on average to classify a character image, and achieves state-of-the-art accuracy of 97.1% while requiring only 3.3MB of storage.
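    A minimal sketch of how a global weighted average pooling layer can stand in for a flattening fully connected layer, as we read the idea: each channel is reduced to one scalar by a learned spatial weight map, so the classifier acts on a C-dimensional vector instead of a C*H*W flattening. The untrained, channel-shared weight map below is an assumption for illustration, not the paper's exact layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Final convolutional feature maps for one image: (channels, height, width).
C, H, W = 64, 7, 7
fmap = rng.normal(size=(C, H, W))

# Learnable spatial weight map shared across channels (initialized uniform).
w = np.full((H, W), 1.0 / (H * W))

def gwap(fmap, w):
    """Global weighted average pooling: one scalar per channel, so the
    classifier needs C x classes weights instead of C*H*W x classes."""
    return np.tensordot(fmap, w, axes=([1, 2], [0, 1]))   # shape (C,)

features = gwap(fmap, w)
logits = features @ rng.normal(size=(C, 3755))  # 3755 HCCR character classes
print(features.shape, logits.shape)             # (64,) (3755,)
```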

    • [cs.CV]Crystal Loss and Quality Pooling for Unconstrained Face Verification and Recognition
    Rajeev Ranjan, Ankan Bansal, Hongyu Xu, Swami Sankaranarayanan, Jun-Cheng Chen, Carlos D. Castillo, Rama Chellappa
    http://arxiv.org/abs/1804.01159v1

    In recent years, the performance of face verification and recognition systems based on deep convolutional neural networks (DCNNs) has significantly improved. A typical pipeline for face verification includes training a deep network for subject classification with softmax loss, using the penultimate layer output as the feature descriptor, and generating a cosine similarity score given a pair of face images or videos. The softmax loss function does not optimize the features to have higher similarity score for positive pairs and lower similarity score for negative pairs, which leads to a performance gap. In this paper, we propose a new loss function, called Crystal Loss, that restricts the features to lie on a hypersphere of a fixed radius. The loss can be easily implemented using existing deep learning frameworks. We show that integrating this simple step in the training pipeline significantly improves the performance of face verification and recognition systems. We achieve state-of-the-art performance for face verification and recognition on challenging LFW, IJB-A, IJB-B and IJB-C datasets over a large range of false alarm rates (10^-1 to 10^-7).
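    The restriction itself is a one-line operation: rescale every feature vector to a fixed L2 norm before the softmax classifier. A minimal numpy sketch; the radius alpha = 50 and the random data are arbitrary choices for illustration.

```python
import numpy as np

def crystal_features(f, alpha=50.0):
    """Scale each feature vector to lie on a hypersphere of fixed radius alpha
    before the softmax classifier (the L2 constraint behind crystal loss)."""
    norms = np.linalg.norm(f, axis=1, keepdims=True)
    return alpha * f / np.maximum(norms, 1e-12)

def softmax_xent(logits, labels):
    """Standard softmax cross-entropy, averaged over the batch."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 128))            # penultimate-layer descriptors
W = rng.normal(size=(128, 10)) * 0.01    # classifier weights
loss = softmax_xent(crystal_features(f) @ W, np.array([0, 1, 2, 3]))
print(loss)
```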

    • [cs.CV]Density Adaptive Point Set Registration
    Felix Järemo Lawin, Martin Danelljan, Fahad Shahbaz Khan, Per-Erik Forssén, Michael Felsberg
    http://arxiv.org/abs/1804.01495v1

    Probabilistic methods for point set registration have demonstrated competitive results in recent years. These techniques estimate a probability distribution model of the point clouds. While such a representation has shown promise, it is highly sensitive to variations in the density of 3D points. This fundamental problem is primarily caused by changes in the sensor location across point sets. We revisit the foundations of the probabilistic registration paradigm. Contrary to previous works, we model the underlying structure of the scene as a latent probability distribution, and thereby induce invariance to point set density changes. Both the probabilistic model of the scene and the registration parameters are inferred by minimizing the Kullback-Leibler divergence in an Expectation Maximization based framework. Our density-adaptive registration successfully handles severe density variations commonly encountered in terrestrial Lidar applications. We perform extensive experiments on several challenging real-world Lidar datasets. The results demonstrate that our approach outperforms state-of-the-art probabilistic methods for multi-view registration, without the need of re-sampling.

    • [cs.CV]Discriminative Cross-View Binary Representation Learning
    Liu Liu, Hairong Qi
    http://arxiv.org/abs/1804.01233v1

    Learning compact representations is vital and challenging for large-scale multimedia data. Cross-view/cross-modal hashing for effective binary representation learning has received significant attention with the exponentially growing availability of multimedia content. Most existing cross-view hashing algorithms emphasize the similarities in individual views, which are then connected via cross-view similarities. In this work, we focus on exploiting the discriminative information from different views, and propose an end-to-end method to learn semantic-preserving and discriminative binary representations, dubbed Discriminative Cross-View Hashing (DCVH), with the goal of learning a multitask binary representation for various tasks including cross-view retrieval, image-to-image retrieval, and image annotation/tagging. The proposed DCVH has the following key components. First, it uses convolutional neural network (CNN) based nonlinear hashing functions and multilabel classification for both images and texts simultaneously. Such hashing functions achieve effective continuous relaxation during training without explicit quantization loss by using Direct Binary Embedding (DBE) layers. Second, we propose an effective view alignment via Hamming distance minimization, which is efficiently accomplished by bit-wise XOR operation. Extensive experiments on two image-text benchmark datasets demonstrate that DCVH outperforms state-of-the-art cross-view hashing algorithms as well as single-view image hashing algorithms. In addition, DCVH can provide competitive performance for image annotation/tagging.
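    The Hamming distance computation that makes binary codes attractive reduces to an XOR followed by a popcount; the code values below are made up:

```python
# Hamming distance between binary codes stored as Python integers:
# XOR marks the differing bits, popcount counts them.
def hamming(a: int, b: int) -> int:
    return (a ^ b).bit_count()   # Python >= 3.10; use bin(a ^ b).count("1") otherwise

code_image = 0b10110010
code_text  = 0b10010110
print(hamming(code_image, code_text))  # 2
```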

    • [cs.CV]Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars
    Ana I. Maqueda, Antonio Loquercio, Guillermo Gallego, Narciso Garcia, Davide Scaramuzza
    http://arxiv.org/abs/1804.01310v1

    Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.

    • [cs.CV]Fine-grained Video Attractiveness Prediction Using Multimodal Deep Learning on a Large Real-world Dataset
    Xinpeng Chen, Jingyuan Chen, Lin Ma, Jian Yao, Wei Liu, Jiebo Luo, Tong Zhang
    http://arxiv.org/abs/1804.01373v1

    Nowadays, billions of videos are online, ready to be viewed and shared. Among this enormous volume of videos, some popular ones are widely viewed by online users while the majority attract little attention. Furthermore, within each video, different segments may attract significantly different numbers of views. This phenomenon leads to a challenging yet important problem, namely fine-grained video attractiveness prediction. However, one major obstacle for such a challenging problem is that no suitable benchmark dataset currently exists. To this end, we construct the first fine-grained video attractiveness dataset (FVAD), which is collected from one of the most popular video websites in the world. In total, FVAD consists of 1,019 drama episodes with 780.6 hours covering different categories and a wide variety of video contents. Apart from the large amount of videos, hundreds of millions of user behaviors during watching videos are also included, such as "view counts", "fast-forward", "fast-rewind", and so on, where "view counts" reflects the video attractiveness while the other engagements capture the interactions between the viewers and videos. First, we demonstrate that video attractiveness and different engagements present different relationships. Second, FVAD provides us with an opportunity to study the fine-grained video attractiveness prediction problem. We design different sequential models to perform video attractiveness prediction by relying solely on video contents. The sequential models exploit the multimodal relationships between the visual and audio components of the video contents at different levels. Experimental results demonstrate the effectiveness of our proposed sequential models with different visual and audio representations, the necessity of incorporating the two modalities, and the complementary behaviors of the sequential prediction models at different levels.

    • [cs.CV]Gaussian Process Uncertainty in Age Estimation as a Measure of Brain Abnormality
    Benjamin Gutierrez Becker, Tassilo Klein, Christian Wachinger
    http://arxiv.org/abs/1804.01296v1

    Multivariate regression models for age estimation are a powerful tool for assessing abnormal brain morphology associated with neuropathology. Age prediction models are built on cohorts of healthy subjects to reflect normal aging patterns. The application of these multivariate models to diseased subjects usually results in high prediction errors, under the hypothesis that neuropathology presents a degenerative pattern similar to that of accelerated aging. In this work, we propose an alternative to the idea that pathology follows a trajectory similar to normal aging. Instead, we propose metrics which measure deviations from the mean aging trajectory. We measure these deviations using two different metrics: the uncertainty in a Gaussian process regression model and a newly proposed age-weighted uncertainty measure. Consequently, our approach assumes that pathologic brain patterns are different from those of normal aging. We present results for subjects with autism, mild cognitive impairment and Alzheimer's disease to highlight the versatility of the approach across diseases and age ranges. We evaluate volume, thickness, and VBM features for quantifying brain morphology. Our evaluations are performed on a large number of images obtained from a variety of publicly available neuroimaging databases. Across all features, our uncertainty-based measurements yield a better separation between diseased subjects and healthy individuals than the prediction error. Finally, we illustrate differences between the disease pattern and normal aging, supporting the application of uncertainty as a measure of neuropathology.
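    The GP uncertainty measure is directly available in standard libraries as the predictive standard deviation, which grows for inputs far from the training distribution of healthy subjects. A toy sketch with a synthetic one-dimensional stand-in for the morphological features (not the paper's data or feature set):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy stand-in for morphological features of healthy subjects: age is a
# noisy function of a single feature.
X_healthy = rng.uniform(-2, 2, (100, 1))
y_age = 50 + 10 * X_healthy[:, 0] + rng.normal(0, 1, 100)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_healthy, y_age)

# The predictive standard deviation is the uncertainty measure: it grows for
# inputs far from the healthy training distribution.
X_test = np.array([[0.0], [5.0]])          # in-distribution vs. atypical subject
mean, std = gp.predict(X_test, return_std=True)
print(std)  # std[1] > std[0]
```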

    • [cs.CV]Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input
    David Harwath, Adrià Recasens, Dídac Surís, Galen Chuang, Antonio Torralba, James Glass
    http://arxiv.org/abs/1804.01452v1

    In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets demonstrating that our models implicitly learn semantically-coupled object and word detectors.

    • [cs.CV]Learning Discriminative Features with Multiple Granularities for Person Re-Identification
    Guanshuo Wang, Yufeng Yuan, Xiong Chen, Jiwei Li, Xi Zhou
    http://arxiv.org/abs/1804.01438v1

    The combination of global and partial features has been an essential solution for improving discriminative performance in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty and is neither efficient nor robust in scenarios with large variance. In this paper, we propose an end-to-end feature learning strategy that integrates discriminative information at various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes, and vary the number of parts in different local branches to obtain local feature representations with multiple granularities. Comprehensive experiments on the mainstream evaluation datasets, including Market-1501, DukeMTMC-reid and CUHK03, indicate that our method robustly achieves state-of-the-art performance and outperforms existing approaches by a large margin. For example, on the Market-1501 dataset in single query mode, we achieve a state-of-the-art result of Rank-1/mAP=96.6%/94.2% after re-ranking.
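    The uniform stripe partition is simple to reproduce. The sketch below pools a feature map at three granularities (1, 2, and 3 horizontal parts) and concatenates the results; it omits MGN's learned reduction layers and losses, and the shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Backbone feature map for one image: (channels, height, width).
fmap = rng.normal(size=(256, 24, 8))

def stripe_features(fmap, n_parts):
    """Uniformly partition the map into horizontal stripes and max-pool each,
    giving local descriptors at the chosen granularity."""
    stripes = np.array_split(fmap, n_parts, axis=1)
    return [s.max(axis=(1, 2)) for s in stripes]       # n_parts vectors of dim C

global_feat = [fmap.max(axis=(1, 2))]                  # global branch: 1 part
parts2 = stripe_features(fmap, 2)                      # local branch: 2 stripes
parts3 = stripe_features(fmap, 3)                      # local branch: 3 stripes
descriptor = np.concatenate(global_feat + parts2 + parts3)
print(descriptor.shape)  # (1536,) = 6 parts x 256 channels
```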

    • [cs.CV]Normalized Cut Loss for Weakly-supervised CNN Segmentation
    Meng Tang, Abdelaziz Djelouah, Federico Perazzi, Yuri Boykov, Christopher Schroers
    http://arxiv.org/abs/1804.01346v1

    Most recent semantic segmentation methods train deep convolutional neural networks with fully annotated masks requiring pixel-accuracy for good quality training. Common weakly-supervised approaches generate full masks from partial input (e.g. scribbles or seeds) using standard interactive segmentation methods as preprocessing. But errors in such masks result in poorer training, since standard loss functions (e.g. cross-entropy) do not distinguish seeds from other, potentially mislabeled pixels. Inspired by the general ideas in semi-supervised learning, we address these problems via a new principled loss function evaluating network output with criteria standard in "shallow" segmentation, e.g. normalized cut. Unlike prior work, the cross-entropy part of our loss evaluates only seeds where labels are known, while normalized cut softly evaluates the consistency of all pixels. We focus on a normalized cut loss whose dense Gaussian kernel is efficiently implemented in linear time by fast bilateral filtering. Our normalized cut loss approach to segmentation brings the quality of weakly-supervised training significantly closer to fully supervised methods.
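    The soft normalized cut term can be written down directly: for soft assignments S and affinity W with degrees d = W1, it sums S_k^T W (1 - S_k) / (d^T S_k) over labels k. A dense toy version follows; the paper evaluates the Gaussian kernel in linear time via fast bilateral filtering, whereas the explicit N x N matrix here is only viable at sketch scale.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 50, 2
feats = rng.normal(size=(N, 3))                    # per-pixel color-like features

# Dense Gaussian affinity between all pixel pairs (fine at toy scale).
sq = ((feats[:, None] - feats[None]) ** 2).sum(-1)
W = np.exp(-sq / 2.0)
d = W.sum(axis=1)

def ncut_loss(S):
    """Soft normalized cut: sum_k S_k^T W (1 - S_k) / (d^T S_k)."""
    loss = 0.0
    for k in range(K):
        s = S[:, k]
        loss += s @ W @ (1 - s) / (d @ s + 1e-12)
    return loss

logits = rng.normal(size=(N, K))
S = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)  # soft network output
print(ncut_loss(S))
```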

    • [cs.CV]Patch-based Face Recognition using a Hierarchical Multi-label Matcher
    Lingfeng Zhang, Pengfei Dou, Ioannis A Kakadiaris
    http://arxiv.org/abs/1804.01417v1

    This paper proposes a hierarchical multi-label matcher for patch-based face recognition. In signature generation, a face image is iteratively divided into multi-level patches. Two different types of patch divisions and signatures are introduced for 2D facial images and texture-lifted images, respectively. The matcher training consists of three steps. First, local classifiers are built to learn the local matching of each patch. Second, the hierarchical relationships defined between local patches are used to learn the global matching of each patch. Three ways of learning the global matching are introduced: majority voting, l1-regularized weighting, and a decision rule. Last, the global matchings of different levels are combined as the final matching. Experimental results on different face recognition tasks demonstrate the effectiveness of the proposed matcher at the cost of gallery generalization. Compared with the UR2D system, the proposed matcher improves the Rank-1 accuracy by 3% and 0.18% on the UHDB31 dataset and IJB-A dataset, respectively.

    • [cs.CV]Prediction and Localization of Student Engagement in the Wild
    Aamir Mustafa, Amanjot Kaur, Love Mehta, Abhinav Dhall
    http://arxiv.org/abs/1804.00858v2

    Student engagement localization can play a key role in designing successful e-learning systems. Facial expressions of an individual in response to stimulus videos provide important cues for estimating variations in engagement level. In this paper we study the association of a subject's facial expressions with his/her engagement level, as annotated by labelers. We then localize engaging/non-engaging parts in the stimulus videos using a deep multiple instance learning based framework, which can give useful insight into designing Massive Open Online Course (MOOC) video material. Recognizing the lack of any publicly available dataset in the domain of user engagement, a new `in the wild' database is created to study subject engagement. The dataset contains 725,000 frames (about 9 hours of recording) across 102 videos captured from 75 subjects. The problem of engagement prediction is modeled as a weak label learning problem. The dataset is manually annotated by labelers for four levels of engagement, and the predicted labels of videos are correlated with the manual labels. This dataset creation is an effort to facilitate research in various e-learning environments such as intelligent tutoring systems, MOOCs and others.

    • [cs.CV]Representing Videos based on Scene Layouts for Recognizing Agent-in-Place Actions
    Ruichi Yu, Hongcheng Wang, Ang Li, Jingxiao Zheng, Vlad I. Morariu, Larry S. Davis
    http://arxiv.org/abs/1804.01429v1

    We address the recognition of agent-in-place actions, which are associated with agents who perform them and places where they occur, in the context of outdoor home surveillance. We introduce a representation of the geometry and topology of scene layouts so that a network can generalize from the layouts observed in the training set to unseen layouts in the test set. This Layout-Induced Video Representation (LIVR) abstracts away low-level appearance variance and encodes geometric and topological relationships of places in a specific scene layout. LIVR partitions the semantic features of a video clip into different places to force the network to learn place-based feature descriptions; to predict the confidence of each action, LIVR aggregates features from the place associated with an action and its adjacent places on the scene layout. We introduce the Agent-in-Place Action dataset to show that our method allows neural network models to generalize significantly better to unseen scenes.

    • [cs.CV]Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval
    Chao Li, Cheng Deng, Ning Li, Wei Liu, Xinbo Gao, Dacheng Tao
    http://arxiv.org/abs/1804.01223v1

    Thanks to the success of deep learning, cross-modal retrieval has made significant progress recently. However, there still remains a crucial bottleneck: how to bridge the modality gap to further enhance the retrieval accuracy. In this paper, we propose a self-supervised adversarial hashing (\textbf{SSAH}) approach, which lies among the early attempts to incorporate adversarial learning into cross-modal hashing in a self-supervised fashion. The primary contribution of this work is that two adversarial networks are leveraged to maximize the semantic correlation and consistency of the representations between different modalities. In addition, we harness a self-supervised semantic network to discover high-level semantic information in the form of multi-label annotations. Such information guides the feature learning process and preserves the modality relationships in both the common semantic space and the Hamming space. Extensive experiments carried out on three benchmark datasets validate that the proposed SSAH surpasses the state-of-the-art methods.

    • [cs.CV]SketchMate: Deep Hashing for Million-Scale Human Sketch Retrieval
    Peng Xu, Yongye Huang, Tongtong Yuan, Kaiyue Pang, Yi-Zhe Song, Tao Xiang, Timothy M. Hospedales, Zhanyu Ma, Jun Guo
    http://arxiv.org/abs/1804.01401v1

    We propose a deep hashing framework for sketch retrieval that, for the first time, works on a multi-million scale human sketch dataset. Leveraging this large dataset, we explore a few sketch-specific traits that were otherwise under-studied in prior literature. Instead of following the conventional sketch recognition task, we introduce the novel problem of sketch hashing retrieval which is not only more challenging, but also offers a better testbed for large-scale sketch analysis, since: (i) more fine-grained sketch feature learning is required to accommodate the large variations in style and abstraction, and (ii) a compact binary code needs to be learned at the same time to enable efficient retrieval. Key to our network design is the embedding of unique characteristics of human sketch, where (i) a two-branch CNN-RNN architecture is adapted to explore the temporal ordering of strokes, and (ii) a novel hashing loss is specifically designed to accommodate both the temporal and abstract traits of sketches. By working with a 3.8M sketch dataset, we show that state-of-the-art hashing models specifically engineered for static images fail to perform well on temporal sketch data. Our network on the other hand not only offers the best retrieval performance on various code sizes, but also yields the best generalization performance under a zero-shot setting and when re-purposed for sketch recognition. Such superior performances effectively demonstrate the benefit of our sketch-specific design.

    • [cs.CV]Stochastic Adversarial Video Prediction
    Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, Sergey Levine
    http://arxiv.org/abs/1804.01523v1

    Being able to predict what may happen in the future requires an in-depth understanding of the physical and causal rules that govern the world. A model that is able to do so has a number of appealing applications, from robotic planning to representation learning. However, learning to predict raw future observations, such as frames in a video, is exceedingly challenging -- the ambiguous nature of the problem can cause a naively designed model to average together possible futures into a single, blurry prediction. Recently, this has been addressed by two distinct approaches: (a) latent variational variable models that explicitly model underlying stochasticity and (b) adversarially-trained models that aim to produce naturalistic images. However, a standard latent variable model can struggle to produce realistic results, and a standard adversarially-trained model underutilizes latent variables and fails to produce diverse predictions. We show that these distinct methods are in fact complementary. Combining the two produces predictions that look more realistic to human raters and better cover the range of possible futures. Our method outperforms prior and concurrent work in these aspects.

    • [cs.CV]Synthesizing Programs for Images using Reinforced Adversarial Learning
    Yaroslav Ganin, Tejas Kulkarni, Igor Babuschkin, S. M. Ali Eslami, Oriol Vinyals
    http://arxiv.org/abs/1804.01118v1

    Advances in deep generative networks have led to impressive results in recent years. Nevertheless, such models can often waste their capacity on the minutiae of datasets, presumably due to weak inductive biases in their decoders. This is where graphics engines may come in handy since they abstract away low-level details and represent images as high-level programs. Current methods that combine deep learning and renderers are limited by hand-crafted likelihood or distance functions, a need for large amounts of supervision, or difficulties in scaling their inference algorithms to richer datasets. To mitigate these issues, we present SPIRAL, an adversarially trained agent that generates a program which is executed by a graphics engine to interpret and sample images. The goal of this agent is to fool a discriminator network that distinguishes between real and rendered data, trained with a distributed reinforcement learning setup without any supervision. A surprising finding is that using the discriminator's output as a reward signal is the key to allow the agent to make meaningful progress at matching the desired output rendering. To the best of our knowledge, this is the first demonstration of an end-to-end, unsupervised and adversarial inverse graphics agent on challenging real world (MNIST, Omniglot, CelebA) and synthetic 3D datasets.

    • [cs.CV]Towards Deep Learning based Hand Keypoints Detection for Rapid Sequential Movements from RGB Images
    Srujana Gattupalli, Ashwin Ramesh Babu, James Robert Brady, Fillia Makedon, Vassilis Athitsos
    http://arxiv.org/abs/1804.01174v1

    Hand keypoint detection and pose estimation have numerous applications in computer vision, but remain unsolved problems in many respects. One application of hand keypoint detection is in performing cognitive assessments of a subject by observing the performance of that subject in physical tasks involving rapid finger motion. As part of this work, we introduce a novel hand keypoints benchmark dataset that consists of hand gestures recorded specifically for cognitive behavior monitoring. We explore the state-of-the-art methods in hand keypoint detection and provide quantitative evaluations of the performance of these methods on our dataset. In the future, these results and our dataset can serve as a useful benchmark for hand keypoint recognition for rapid finger movements.

    • [cs.CV]Unsupervised Geometry-Aware Representation for 3D Human Pose Estimation
    Helge Rhodin, Mathieu Salzmann, Pascal Fua
    http://arxiv.org/abs/1804.01110v1

    Modern 3D human pose estimation techniques rely on deep networks, which require large amounts of training data. While weakly-supervised methods require less supervision, by utilizing 2D poses or multi-view imagery without annotations, they still need a sufficiently large set of samples with 3D annotations for learning to succeed. In this paper, we propose to overcome this problem by learning a geometry-aware body representation from multi-view images without annotations. To this end, we use an encoder-decoder that predicts an image from one viewpoint given an image from another viewpoint. Because this representation encodes 3D geometry, using it in a semi-supervised setting makes it easier to learn a mapping from it to 3D human pose. As evidenced by our experiments, our approach significantly outperforms fully-supervised methods given the same amount of labeled data, and improves over other semi-supervised methods while using as little as 1% of the labeled data.

    • [cs.CV]Unsupervised Semantic-based Aggregation of Deep Convolutional Features
    Jian Xu, Chunheng Wang, Chengzuo Qi, Cunzhao Shi, Baihua Xiao
    http://arxiv.org/abs/1804.01422v1

    In this paper, we propose a simple but effective semantic-based aggregation (SBA) method. The proposed SBA utilizes the discriminative filters of deep convolutional layers as semantic detectors. Moreover, we propose an effective unsupervised strategy to select semantic detectors that generate "probabilistic proposals", which highlight certain discriminative patterns of objects and suppress background noise. The final global SBA representation is then acquired by aggregating the regional representations weighted by the selected "probabilistic proposals" corresponding to various semantic content. Our unsupervised SBA is easy to generalize and achieves excellent performance on various tasks. We conduct comprehensive experiments and show that our unsupervised SBA outperforms state-of-the-art unsupervised and supervised aggregation methods on image retrieval, place recognition and cloud classification.

    • [cs.CV]Visual Object Categorization Based on Hierarchical Shape Motifs Learned From Noisy Point Cloud Decompositions
    Christian A. Mueller, Andreas Birk
    http://arxiv.org/abs/1804.01117v1

    Object shape is a key cue that contributes to the semantic understanding of objects. In this work we focus on categorizing real-world object point clouds into particular shape types. Here, the surface description and the representation of object shape structure have a significant influence on shape categorization accuracy when dealing with real-world scenes featuring noisy, partial and occluded object observations. An unsupervised hierarchical learning procedure is utilized to symbolically describe surface characteristics on multiple semantic levels. Furthermore, a constellation model is proposed that hierarchically decomposes objects. The decompositions are described as constellations of symbols (shape motifs) in a gradual order, hence reflecting shape structure from local to global, i.e., from parts over groups of parts to entire objects. The combination of this multi-level description of surfaces and the hierarchical decomposition of shapes leads to a representation that makes it possible to conceptualize shapes. Good object discrimination has been observed in experiments with seven categories featuring instances with sensor noise, occlusions, and inter-category and intra-category similarities. Experiments include the evaluation of the proposed description and shape decomposition approach, and comparisons to Fast Point Feature Histograms, a Vocabulary Tree and a neural network-based deep learning method. Furthermore, experiments are conducted on alternative datasets to analyze the generalization capability of the proposed approach.

    • [cs.CY]Online Abuse of UK MPs in 2015 and 2017: Perpetrators, Targets, and Topics
    Genevieve Gorrell, Mark Greenwood, Ian Roberts, Diana Maynard, Kalina Bontcheva
    http://arxiv.org/abs/1804.01498v1

    Concerns have reached the mainstream about how social media are affecting political outcomes. One trajectory for this is the exposure of politicians to online abuse. In this paper we use 1.4 million tweets from the months before the 2015 and 2017 UK general elections to explore the abuse directed at politicians. This collection allows us to look at abuse broken down by both party and gender and aimed at specific Members of Parliament. It also allows us to investigate the characteristics of those who send abuse and their topics of interest. Results show that in both absolute and proportional terms, abuse increased substantially in 2017 compared with 2015. Abusive replies are somewhat less directed at women and those not in the currently governing party. Those who send the abuse may be issue-focused, or they may repeatedly target an individual. In the latter category, accounts are more likely to be throwaway. Those sending abuse have a wide range of topical triggers, including borders and terrorism.

    • [cs.DB]NegPSpan: efficient extraction of negative sequential patterns with embedding constraints
    Thomas Guyet, René Quiniou
    http://arxiv.org/abs/1804.01256v1

    Mining frequent sequential patterns consists of extracting recurrent behaviors, modeled as patterns, from a large sequence dataset. Such patterns inform about which events are frequently observed in sequences, i.e., what really happens. Sometimes, knowing that a specific event does not happen is more informative than extracting many observed events. Negative sequential patterns (NSPs) formulate recurrent behaviors as patterns containing both observed events and absent events. Few approaches have been proposed to mine such NSPs. In addition, the syntax and semantics of NSPs differ between methods, which makes them difficult to compare. This article provides a unified framework for the formulation of the syntax and semantics of NSPs. We then introduce a new algorithm, NegPSpan, that extracts NSPs using a PrefixSpan depth-first scheme and enables maxgap constraints that other approaches do not take into account. The formal framework highlights the differences between the proposed approach and methods from the literature, especially the state-of-the-art approach eNSP. Intensive experiments on synthetic and real datasets show that NegPSpan can extract meaningful NSPs and that it can process bigger datasets than eNSP thanks to significantly lower memory requirements and better computation times.
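
    To make the semantics concrete, the following toy matcher checks one plausible reading of a negative pattern <a, not(b), c> under a maxgap constraint; it is an illustration only, not NegPSpan's actual definition or code.

    ```python
    # Toy illustration of negative sequential pattern semantics with a maxgap
    # constraint: the pattern <a, not(b), c> matches a sequence if some
    # occurrence of a is followed by c within `maxgap` positions and no b
    # occurs in between. One plausible reading, not NegPSpan's exact code.

    def matches(seq, a, absent, c, maxgap):
        for i, x in enumerate(seq):
            if x != a:
                continue
            for j in range(i + 1, min(i + 1 + maxgap, len(seq))):
                if seq[j] == c and absent not in seq[i + 1:j]:
                    return True
        return False

    print(matches(list("axxc"), "a", "b", "c", maxgap=5))     # True
    print(matches(list("axbc"), "a", "b", "c", maxgap=5))     # False: b occurs in between
    print(matches(list("axxxxxc"), "a", "b", "c", maxgap=3))  # False: gap too large
    ```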

    • [cs.DC]A Deterministic Distributed $2$-Approximation for Weighted Vertex Cover in $O(\log n \log \Delta / \log^2 \log \Delta)$ Rounds
    Ran Ben-Basat, Guy Even, Ken-ichi Kawarabayashi, Gregory Schwartzman
    http://arxiv.org/abs/1804.01308v1

    We present a deterministic distributed $2$-approximation algorithm for the Minimum Weight Vertex Cover problem in the CONGEST model whose round complexity is $O(\log n \log \Delta / \log^2 \log \Delta)$. This improves over the currently best known deterministic 2-approximation implied by [KVY94]. Our solution generalizes the $(2+\epsilon)$-approximation algorithm of [BCS17], improving the dependency on $\epsilon^{-1}$ from linear to logarithmic. In addition, for every $\epsilon=(\log \Delta)^{-c}$, where $c\geq 1$ is a constant, our algorithm computes a $(2+\epsilon)$-approximation in $O(\log \Delta / \log \log \Delta)$~rounds (which is asymptotically optimal).

    • [cs.DC]Designing a Micro-Benchmark Suite to Evaluate gRPC for TensorFlow: Early Experiences
    Rajarshi Biswas, Xiaoyi Lu, Dhabaleswar K. Panda
    http://arxiv.org/abs/1804.01138v1

    Remote procedure call (RPC) is the backbone of many modern distributed systems. Google's gRPC is one of the most popular open-source RPC frameworks available in the community, and it is the main communication engine for Google's deep learning framework TensorFlow, which primarily uses gRPC for communicating tensors and administrative tasks among different processes. Tensor updates during the training phase are communication intensive, and thus TensorFlow's performance is heavily dependent on the underlying network and the efficacy of the communication engine. Training deep learning models on TensorFlow can take significant time, ranging from several minutes to several hours or even days, so system researchers need to devote a lot of time to understanding the impact of communication on overall performance. Clearly, there is a lack of benchmarks available for this purpose. Therefore, we propose the TF-gRPC-Bench micro-benchmark suite, which enables system researchers to quickly understand the impact of the underlying network and communication runtime on deep learning workloads. To achieve this, we first analyze the characteristics of TensorFlow workloads over gRPC by training popular deep learning models. Then, we propose three micro-benchmarks that take these workload characteristics into account. In addition, we comprehensively evaluate gRPC with the TF-gRPC-Bench micro-benchmark suite on different clusters over Ethernet, IPoIB, and RDMA, and present the results.

    • [cs.DC]Maintenance of Strongly Connected Component in Shared-memory Graph
    Muktikanta Sa
    http://arxiv.org/abs/1804.01276v1

    In this paper, we propose a generic concurrent directed graph (for shared memory architectures) that is concurrently updated by threads adding/deleting vertices and edges. The graph is constructed by composition of the well-known concurrent list-based set data-structure from the literature. Our construction is generic, in the sense that it can be used to obtain various progress guarantees, depending on the granularity of the underlying concurrent set implementation - either blocking or non-blocking. We prove that the proposed construction is linearizable by identifying its linearization points. Finally, we compare the performance of all the variants of the concurrent graph data-structure along with its sequential implementation. We observe that our concurrent graph data-structure mimics the performance of the concurrent list-based set.

    • [cs.DC]Optimal Rendezvous ${\mathcal L}$-Algorithms for Asynchronous Mobile Robots with External-Lights
    Takashi Okumura, Koichi Wada, Xavier Défago
    http://arxiv.org/abs/1804.01368v1

    We study the Rendezvous problem for two autonomous mobile robots in asynchronous settings with persistent memory called light. It is well known that Rendezvous is impossible in a basic model when robots have no lights, even if the system is semi-synchronous. On the other hand, Rendezvous is possible if robots have lights of various types with a constant number of colors. If robots can observe not only their own lights but also other robots' lights, their lights are called full-lights. If robots can only observe the state of other robots' lights, the lights are called external-lights. In this paper, we focus on robots with external-lights in asynchronous settings and a particular class of algorithms (called L-algorithms), where an L-algorithm computes a destination based only on the current colors of observable lights. When considering L-algorithms, Rendezvous can be solved by robots with full-lights and 3 colors in general asynchronous settings (called ASYNC), and the number of colors is optimal under these assumptions. In contrast, there exist no L-algorithms in ASYNC with external-lights, regardless of the number of colors. In this paper, we consider a fairly large subclass of ASYNC in which Rendezvous can be solved by L-algorithms using external-lights with a finite number of colors, and we show that the algorithms are optimal in the number of colors they use.

    • [cs.DC]Personal Volunteer Computing
    Erick Lavoie, Laurie Hendren
    http://arxiv.org/abs/1804.01482v1

    Since the 1990s, the number of personal computing devices has exploded. Today we collectively own billions of devices whose computing resources are underused the majority of the time, and which are either time-consuming to leverage or not supported by mainstream distributed computing approaches. We propose to revisit volunteer computing, where volunteers from around the world contribute the computing resources of their devices to high-profile projects, with a more personal focus along these main dimensions: nature, scope, and length of the project tasks; relationship with the volunteers; ownership of the participating devices; and ownership and complexity of the platform and technologies. We present our position in contrast to other popular distributed computing approaches, in order of increasing similarity. Our approach is original and highlights the historical lack of focus on the limiting effects of complexity on computing platforms: this complexity limits the ability of users to deploy and repurpose the platforms in evolving contexts. To show the viability of the approach we implemented Pando, a personal volunteer computing platform for the Web, which can be described with only a few abstractions. This tool now enables scientists and programmers to quickly use their personal devices, and those of their friends, to accelerate their computations by writing a single JavaScript function. Tools built for personal volunteer computing, such as Pando, may reduce the need to acquire additional devices, and therefore reduce pressure on the resources needed to produce them, because they tap into the abundance of computing resources that already exist. Moreover, they may spread the benefits of computing more evenly by serving the needs of communities and individuals that are not served by other mainstream approaches.

    • [cs.HC]Vanlearning: A Machine Learning SaaS Application for People Without Programming Backgrounds
    Chaochen Wu
    http://arxiv.org/abs/1804.01382v1

    Although many machine learning tools exist for analyzing data, most of them require users to have some programming background. Here we introduce a SaaS application that allows users to analyze their data without any coding, and even without any knowledge of machine learning. Users can upload, train, predict and download their data with simple mouse clicks. Our system uses a data pre-processor and validator to relieve the computational cost on our server. The simple architecture of Vanlearning helps developers easily maintain and extend it.

    • [cs.IT]Controllable Identifier Measurements for Private Authentication with Secret Keys
    Onur Günlü, Kittipong Kittichokechai, Rafael F. Schaefer, Giuseppe Caire
    http://arxiv.org/abs/1804.01430v1

    The problem of secret-key based authentication under a privacy constraint on the source sequence is considered. The identifier measurements during authentication are assumed to be controllable via a cost-constrained "action" sequence. Single-letter characterizations of the optimal trade-off among the secret-key rate, storage rate, privacy-leakage rate, and action cost are given for the four problems where noisy or noiseless measurements of the source are enrolled to generate or embed secret keys. The results are relevant for several user-authentication scenarios including physical and biometric authentications with multiple measurements. Our results include, as special cases, new results for secret-key generation and embedding with action-dependent side information without any privacy constraint on the enrolled source sequence.

    • [cs.IT]Nonexistence of generalized bent functions and the quadratic norm form equations
    Chang Lv
    http://arxiv.org/abs/1804.01292v1

    We obtain the nonexistence of generalized bent functions (GBFs) from $(\mathbb{Z}/t\mathbb{Z})^n$ to $\mathbb{Z}/t\mathbb{Z}$ (called type $[n,t]$) for a large new class. Specifically, by showing that certain quadratic norm form equations have no integral points, we obtain the universal nonexistence of GBFs of type $[n, 2p^e]$ for all sufficiently large $p$ with respect to $n$ and $(p-1)/\mathrm{ord}_2(p)$, and by computational methods under a well-accepted hypothesis (the generalized Riemann hypothesis), we also establish some nonexistence results for relatively small primes $p$.

    • [cs.IT]The Capacity of Anonymous Communications
    Hua Sun
    http://arxiv.org/abs/1804.01497v1

    We consider the communication scenario where K transmitters are each connected to a common receiver with an orthogonal noiseless link. One of the transmitters has a message for the receiver, who is prohibited from learning anything in the information theoretic sense about which transmitter sends the message (transmitter anonymity is guaranteed). The capacity of anonymous communications is the maximum number of bits of desired information that can be anonymously communicated per bit of total communication. For this anonymous communication problem over a parallel channel with K transmitters and 1 receiver, we show that the capacity is 1/K, i.e., to communicate 1 bit anonymously, each transmitter must send a 1 bit signal. Further, each transmitter requires at least 1 bit of correlated randomness (independent of the messages) per message bit, and the total size of the correlated randomness across all K transmitters is at least K-1 bits per message bit.
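
    A toy scheme consistent with the stated rates (one bit sent per transmitter, K-1 bits of shared randomness per message bit) is the classical DC-net-style construction sketched below; it illustrates achievability but is not necessarily the paper's proof technique.

    ```python
    import secrets

    # DC-net-style toy: distribute correlated randomness r_1,...,r_K with
    # XOR 0 (K-1 independent bits in total) and let the sender XOR its
    # message bit onto its share. The receiver XORs all K signals to recover
    # the bit, while each signal individually looks uniformly random.
    K = 5
    r = [secrets.randbelow(2) for _ in range(K - 1)]
    r.append(sum(r) % 2)            # force the XOR of all shares to 0

    sender, message = 2, 1          # transmitter 2 sends bit 1
    signals = [r[i] ^ (message if i == sender else 0) for i in range(K)]

    recovered = 0
    for s in signals:
        recovered ^= s
    print(recovered == message)     # True; the signals reveal nothing about `sender`
    ```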

    • [cs.LG]Feature selection in weakly coherent matrices
    Stephane Chretien, Zhen-Wai Olivier Ho
    http://arxiv.org/abs/1804.01119v1

    A problem of paramount importance in both pure (the Restricted Invertibility problem) and applied mathematics (feature extraction) is that of selecting a submatrix of a given matrix such that this submatrix has its smallest singular value above a specified level. Such problems can be addressed using perturbation analysis. In this paper, we propose a perturbation bound for the smallest singular value of a given matrix after appending a column, under the assumption that its initial coherence is not large, and we use this bound to derive a fast algorithm for feature extraction.
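
    The quantities involved in such a bound are easy to compute directly; the following numpy snippet simply evaluates the smallest singular value before and after appending a column, together with the coherence, on synthetic data (the bound itself is derived in the paper, and all sizes here are arbitrary).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 30))
    X /= np.linalg.norm(X, axis=0)                   # unit-norm columns

    def coherence(A):
        G = A.T @ A                                  # Gram matrix
        return np.abs(G - np.eye(A.shape[1])).max()  # max off-diagonal correlation

    S = X[:, :10]                                    # currently selected features
    new_col = X[:, 10:11]                            # candidate column to append
    before = np.linalg.svd(S, compute_uv=False).min()
    after = np.linalg.svd(np.hstack([S, new_col]), compute_uv=False).min()
    print(f"coherence={coherence(X):.3f}  smin before={before:.3f}  after={after:.3f}")
    ```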

    • [cs.LG]Hospital Readmission Prediction - Applying Hierarchical Sparsity Norms for Interpretable Models
    Jialiang Jiang, Sharon Hewner, Varun Chandola
    http://arxiv.org/abs/1804.01188v1

    Hospital readmissions have become one of the key measures of healthcare quality. Preventable readmissions have been identified as one of the primary targets for reducing costs and improving healthcare delivery. However, most data-driven studies for understanding readmissions have produced black-box classification and predictive models with moderate performance, which precludes them from being used effectively within the decision support systems in hospitals. In this paper we present an application of structured sparsity-inducing norms for predicting readmission risk for patients based on their disease history and demographics. Most existing studies have focused on hospital utilization, test results, etc., to assign a readmission label to each episode of hospitalization. However, we focus on assigning a readmission risk label to a patient based on their disease history. Our emphasis is on interpreting the models to improve the understanding of the readmission problem. To achieve this, we exploit the domain-induced hierarchical structure available for the disease codes, which are the features for the classification algorithm. We use a tree-based sparsity-inducing regularization strategy that explicitly uses the domain hierarchy. The resulting model not only outperforms standard regularization procedures but is also highly sparse and interpretable. We analyze the model and identify several significant factors that have an effect on readmission risk. Some of these factors conform to existing beliefs, e.g., the impact of surgical complications and infections during hospital stay. Other factors, such as the impact of mental disorder and substance abuse on readmission, provide empirical evidence for several pre-existing but unverified hypotheses. The analysis also reveals previously undiscovered connections such as the influence of socioeconomic factors like lack of housing and malnutrition.

    • [cs.LG]Information Maximizing Exploration with a Latent Dynamics Model
    Trevor Barron, Oliver Obst, Heni Ben Amor
    http://arxiv.org/abs/1804.01238v1

    All reinforcement learning algorithms must handle the trade-off between exploration and exploitation. Many state-of-the-art deep reinforcement learning methods use noise in the action selection, such as Gaussian noise in policy gradient methods or $\epsilon$-greedy in Q-learning. While these methods are appealing due to their simplicity, they do not explore the state space in a methodical manner. We present an approach that uses a model to derive reward bonuses as a means of intrinsic motivation to improve model-free reinforcement learning. A key insight of our approach is that this dynamics model can be learned in the latent feature space of a value function, representing the dynamics of the agent and the environment. This method is both theoretically grounded and computationally advantageous, permitting the efficient use of Bayesian information-theoretic methods in high-dimensional state spaces. We evaluate our method on several continuous control tasks, focusing on improving exploration.

    • [cs.LG]Online Multi-Label Classification: A Label Compression Method
    Zahra Ahmadi, Stefan Kramer
    http://arxiv.org/abs/1804.01491v1

    Many modern applications deal with multi-label data, such as functional categorizations of genes, image labeling and text categorization. Classification of such data with a large number of labels and latent dependencies among them is a challenging task, and it becomes even more challenging when the data are received online and in chunks. Many current multi-label classification methods require a lot of time and memory, which makes them infeasible for practical real-world applications. In this paper, we propose a fast linear label space dimension reduction method that transforms the labels into a reduced encoded space and trains models on the obtained pseudo labels. Additionally, it provides an analytical method to update the decoding matrix, which maps the labels into the original space and is used during the test phase. Experimental results show the effectiveness of this approach in terms of running times and prediction performance over different measures.
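
    The generic encode-train-decode pattern behind such methods can be sketched as follows; the random-projection encoder and the least-squares decoder here are illustrative stand-ins, not the paper's analytical online update of the decoding matrix.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    # Encode: project the binary label matrix Y to a low-dimensional code.
    # Train: fit regressors on the pseudo labels. Decode: map predictions back.
    rng = np.random.default_rng(0)
    n, p, L, d = 500, 20, 40, 8                 # samples, features, labels, code dim
    X = rng.standard_normal((n, p))
    W = rng.standard_normal((p, L))
    Y = (X @ W + rng.standard_normal((n, L)) > 4.0).astype(float)   # binary labels

    P = rng.standard_normal((L, d)) / np.sqrt(d)    # encoder (random projection)
    Z = Y @ P                                       # pseudo labels
    model = Ridge(alpha=1.0).fit(X, Z)

    D = np.linalg.lstsq(Z, Y, rcond=None)[0]        # decoder: code -> label space
    Y_hat = (model.predict(X) @ D > 0.5).astype(float)
    print("train Hamming accuracy:", (Y_hat == Y).mean())
    ```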

    • [cs.LG]Renewal Monte Carlo: Renewal theory based reinforcement learning
    Jayakumar Subramanian, Aditya Mahajan
    http://arxiv.org/abs/1804.01116v1

    In this paper, we present an online reinforcement learning algorithm, called Renewal Monte Carlo (RMC), for infinite horizon Markov decision processes with a designated start state. RMC is a Monte Carlo algorithm that retains the advantages of Monte Carlo methods, including low bias, simplicity, and ease of implementation, while at the same time circumventing their key drawbacks of high variance and delayed (end of episode) updates. The key ideas behind RMC are as follows. First, under any reasonable policy, the reward process is ergodic, so by renewal theory the performance of a policy equals the ratio of the expected discounted reward to the expected discounted time over a regenerative cycle. Second, by carefully examining the expression for the performance gradient, we propose a stochastic approximation algorithm that only requires estimates of the expected discounted reward and discounted time over a regenerative cycle and their gradients. We propose two unbiased estimators for evaluating performance gradients---a likelihood ratio based estimator and a simultaneous perturbation based estimator---and show that for both estimators, RMC converges to a locally optimal policy. We generalize the RMC algorithm to post-decision state models and also present a variant that converges faster to an approximately optimal policy. We conclude by presenting numerical experiments on a randomly generated MDP, event-triggered communication, and inventory management.
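
    The renewal identity that RMC exploits can be checked numerically on a toy chain: with $R$ the discounted reward and $\beta^\tau$ the discount accumulated over one regenerative cycle, $J = \mathbb{E}[R]/(1-\mathbb{E}[\beta^\tau])$, and since the expected discounted time satisfies $\mathbb{E}[T] = (1-\mathbb{E}[\beta^\tau])/(1-\beta)$, this is proportional to the reward-to-time ratio above. The sketch below verifies this on a two-state chain with a fixed policy (not the full RMC algorithm; chain and rewards are toys).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    beta = 0.9
    P = np.array([[0.7, 0.3], [0.4, 0.6]])
    reward = np.array([1.0, 5.0])

    def one_cycle():
        s, t, R = 0, 0, 0.0
        while True:
            R += beta**t * reward[s]          # accumulate discounted reward
            s = rng.choice(2, p=P[s])
            t += 1
            if s == 0:                        # regeneration: back at start state
                return R, beta**t

    cycles = [one_cycle() for _ in range(50_000)]
    R_bar = np.mean([c[0] for c in cycles])
    disc = np.mean([c[1] for c in cycles])
    print("renewal estimate:", R_bar / (1 - disc))

    J = np.linalg.solve(np.eye(2) - beta * P, reward)   # exact J = r + beta*P*J
    print("exact J(start):  ", J[0])
    ```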

    • [cs.LG]Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law
    Max Simchowitz, Ahmed El Alaoui, Benjamin Recht
    http://arxiv.org/abs/1804.01221v1

    We prove a \emph{query complexity} lower bound for approximating the top $r$ dimensional eigenspace of a matrix. We consider an oracle model where, given a symmetric matrix $\mathbf{M} \in \mathbb{R}^{d \times d}$, an algorithm $\mathsf{Alg}$ is allowed to make $\mathsf{T}$ exact queries of the form $\mathsf{w}^{(i)} = \mathbf{M} \mathsf{v}^{(i)}$ for $i \in \{1,\ldots,\mathsf{T}\}$, where $\mathsf{v}^{(i)}$ is drawn from a distribution which depends arbitrarily on the past queries and measurements $\{\mathsf{v}^{(j)},\mathsf{w}^{(j)}\}_{1 \le j \le i-1}$. We show that for every $\mathtt{gap} \in (0,1/2]$, there exists a distribution over matrices $\mathbf{M}$ for which 1) $\mathrm{gap}_r(\mathbf{M}) = \Omega(\mathtt{gap})$ (where $\mathrm{gap}_r(\mathbf{M})$ is the normalized gap between the $r$-th and $(r+1)$-st largest-magnitude eigenvalues of $\mathbf{M}$), and 2) any algorithm $\mathsf{Alg}$ which takes fewer than $\mathrm{const} \times \frac{r \log d}{\sqrt{\mathtt{gap}}}$ queries fails (with overwhelming probability) to identify a matrix $\widehat{\mathsf{V}} \in \mathbb{R}^{d \times r}$ with orthonormal columns for which $\langle \widehat{\mathsf{V}}, \mathbf{M} \widehat{\mathsf{V}}\rangle \ge (1 - \mathrm{const} \times \mathtt{gap})\sum_{i=1}^r \lambda_i(\mathbf{M})$. Our bound requires only that $d$ is a small polynomial in $1/\mathtt{gap}$ and $r$, and matches the upper bounds of Musco and Musco '15. Moreover, it establishes a strict separation between convex optimization and \emph{randomized}, "strict-saddle" non-convex optimization, of which PCA is a canonical example: in the former, first-order methods can have dimension-free iteration complexity, whereas in PCA the iteration complexity of gradient-based methods must necessarily grow with the dimension.

    • [cs.LG]Universal Planning Networks
    Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, Chelsea Finn
    http://arxiv.org/abs/1804.00645v2

    A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization. To this end, we introduce universal planning networks (UPN). UPNs embed differentiable planning within a goal-directed policy. This planning computation unrolls a forward model in a latent space and infers an optimal action plan through gradient descent trajectory optimization. The plan-by-gradient-descent process and its underlying representations are learned end-to-end to directly optimize a supervised imitation learning objective. We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images. The learned representations can be leveraged to specify distance-based rewards to reach new target states for model-free reinforcement learning, resulting in substantially more effective learning when solving new tasks described via image-based goals. We were able to achieve successful transfer of visuomotor planning strategies across robots with significantly different morphologies and actuation capabilities.
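
    A bare-bones version of the plan-by-gradient-descent inner loop is sketched below, with an assumed known linear latent model in place of the learned forward model (UPN learns the model and representation end-to-end, which this sketch omits; all sizes and parameters are illustrative).

    ```python
    import numpy as np

    # Gradient-descent trajectory optimization through latent dynamics
    # z' = A z + B a, so the backward pass is closed-form.
    rng = np.random.default_rng(0)
    dz, da, H = 4, 2, 10                       # latent dim, action dim, horizon
    Q, _ = np.linalg.qr(rng.standard_normal((dz, dz)))
    A = 0.9 * Q                                # stable latent dynamics (assumed)
    B = rng.standard_normal((dz, da)) * 0.3
    z0, z_goal = rng.standard_normal(dz), rng.standard_normal(dz)

    actions = np.zeros((H, da))
    for step in range(500):                    # gradient descent on the plan
        zs = [z0]
        for a in actions:                      # unroll the forward model
            zs.append(A @ zs[-1] + B @ a)
        err = zs[-1] - z_goal                  # terminal cost 0.5*||z_H - z*||^2
        grad_z = err                           # backprop through linear dynamics
        grads = np.zeros_like(actions)
        for t in reversed(range(H)):
            grads[t] = B.T @ grad_z
            grad_z = A.T @ grad_z
        actions -= 0.1 * grads
    print("final latent error:", np.linalg.norm(err))
    ```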

    • [cs.NE]When Hypermutations and Ageing Enable Artificial Immune Systems to Outperform Evolutionary Algorithms
    Dogan Corus, Pietro S. Oliveto, Donya Yazdani
    http://arxiv.org/abs/1804.01314v1

    We present a time complexity analysis of the Opt-IA artificial immune system (AIS). We first highlight the power and limitations of its distinguishing operators (i.e., hypermutations with mutation potential and ageing) by analysing them in isolation. Recent work has shown that ageing combined with local mutations can help escape local optima on a dynamic optimisation benchmark function. We generalise this result by rigorously proving that, compared to evolutionary algorithms (EAs), ageing leads to impressive speed-ups on the standard Cliff benchmark function when using both local and global mutations. Unless the stop-at-first-constructive-mutation (FCM) mechanism is applied, we show that hypermutations require exponential expected runtime to optimise any function with a polynomial number of optima. If instead FCM is used, the expected runtime is at most a linear factor larger than the upper bound achieved for any random local search algorithm using the artificial fitness levels method. Nevertheless, we prove that algorithms using hypermutations can be considerably faster than EAs at escaping local optima. An analysis of the complete Opt-IA reveals that it is efficient on the previously considered functions and highlights problems where the use of the full algorithm is crucial. We complete the picture by presenting a class of functions for which Opt-IA fails with overwhelming probability while standard EAs are efficient.

    • [cs.SI]Information Propagation Analysis of Social Network Using the Universality of Random Matrix
    Yusuke Sakumoto, Tsukasa Kameyama, Chisa Takano, Masaki Aida
    http://arxiv.org/abs/1804.01254v1

    Spectral graph theory gives an algebraic approach to analyzing the dynamics of a network by using the matrix that represents the network structure. However, it is not easy to apply spectral graph theory to social networks because the matrix elements cannot be given exactly to represent the structure of a social network. The matrix elements should be set on the basis of the relationships between persons, but these relationships cannot be quantified accurately from obtainable data (e.g., call history and chat history). To get around this problem, we utilize the universality of random matrices with the features of social networks. As such a random matrix, we use the normalized Laplacian matrix of a network where link weights are randomly given. In this paper, we first clarify that the universality (i.e., the Wigner semicircle law) of the normalized Laplacian matrix appears in the eigenvalue frequency distribution regardless of the link weight distribution. Then, we analyze the information propagation speed by using spectral graph theory and the universality of the normalized Laplacian matrix. As a result, we show that the worst-case speed of information propagation changes by at most a factor of 2 if the structure (i.e., relationships among people) of a social network changes.
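
    The universality claim is easy to probe empirically; the following numpy sketch histograms the eigenvalues of the normalized Laplacian of a random graph with, for instance, exponentially distributed link weights (the Erdos-Renyi graph model and all parameters are illustrative choices).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 2000, 0.01
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    W = np.triu(A * rng.exponential(1.0, size=(n, n)), 1)
    W = W + W.T                                          # symmetric weighted graph

    d = W.sum(axis=1)
    d[d == 0] = 1.0                                      # guard isolated nodes
    Dinv = 1.0 / np.sqrt(d)
    L = np.eye(n) - Dinv[:, None] * W * Dinv[None, :]    # normalized Laplacian

    eig = np.linalg.eigvalsh(L)                          # spectrum concentrates around 1
    hist, edges = np.histogram(eig, bins=30, range=(0, 2), density=True)
    for h, lo in zip(hist, edges):                       # crude text histogram
        print(f"{lo:4.2f} {'#' * int(40 * h)}")
    ```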

    • [cs.SI]Real-time Detection of Content Polluters in Partially Observable Twitter Networks
    Mehwish Nasim, Andrew Nguyen, Nick Lothian, Robert Cope, Lewis Mitchell
    http://arxiv.org/abs/1804.01235v1

    Content polluters, or bots that hijack a conversation for political or advertising purposes, are a known problem for event prediction, election forecasting and for distinguishing real news from fake news in social media data. Identifying this type of bot is particularly challenging, with state-of-the-art methods utilising large volumes of network data as features for machine learning models. Such datasets are generally not readily available in typical applications which stream social media data for real-time event prediction. In this work we develop a methodology to detect content polluters in social media datasets that are streamed in real time. Applying our method to the problem of civil unrest event prediction in Australia, we identify content polluters from individual tweets, without collecting social network or historical data from individual accounts. We identify some peculiar characteristics of these bots in our dataset and propose metrics for identifying such accounts. We then pose some research questions around this type of bot detection, including how good Twitter is at detecting content polluters and how well state-of-the-art methods perform in detecting bots in our dataset.

    • [cs.SI]WhatsApp, Doc? A First Look at WhatsApp Public Group Data
    Kiran Garimella, Gareth Tyson
    http://arxiv.org/abs/1804.01473v1

    In this dataset paper we describe our work on the collection and analysis of public WhatsApp group data. Our primary goal is to explore the feasibility of collecting and using WhatsApp data for social science research. We therefore present a generalisable data collection methodology, and a publicly available dataset for use by other researchers. To provide context, we perform statistical exploration to allow researchers to understand what public WhatsApp group data can be collected and how this data can be used. Given the widespread use of WhatsApp, our techniques to obtain public data and potential applications are important for the community.

    • [cs.SY]Real-Time Prediction of the Duration of Distribution System Outages
    Aaron Jaech, Baosen Zhang, Mari Ostendorf, Daniel S. Kirschen
    http://arxiv.org/abs/1804.01189v1

    This paper addresses the problem of predicting duration of unplanned power outages, using historical outage records to train a series of neural network predictors. The initial duration prediction is made based on environmental factors, and it is updated based on incoming field reports using natural language processing to automatically analyze the text. Experiments using 15 years of outage records show good initial results and improved performance leveraging text. Case studies show that the language processing identifies phrases that point to outage causes and repair steps.

    • [econ.EM]Should We Condition on the Test for Pre-trends in Difference-in-Difference Designs?
    Jonathan Roth
    http://arxiv.org/abs/1804.01208v1

    The common practice in difference-in-difference (DiD) designs is to check for parallel trends prior to treatment assignment, yet typical estimation and inference does not account for the fact that this test has occurred. I analyze the properties of the traditional DiD estimator conditional on having passed (i.e. not rejected) the test for parallel pre-trends. When the DiD design is valid and the test for pre-trends confirms it, the typical DiD estimator is unbiased, but traditional standard errors are overly conservative. Additionally, there exists an alternative unbiased estimator that is more efficient than the traditional DiD estimator under parallel trends. However, when in population there is a non-zero pre-trend but we fail to reject the hypothesis of parallel pre-trends, the DiD estimator is generally biased relative to the population DiD coefficient. Moreover, if the trend is monotone, then under reasonable assumptions the bias from conditioning exacerbates the bias relative to the true treatment effect. I propose new estimation and inference procedures that account for the test for parallel trends, and compare their performance to that of the traditional estimator in a Monte Carlo simulation.
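
    The conditioning phenomenon can be reproduced in a few lines of simulation; the three-period data-generating process and z-test below are deliberately simplistic stand-ins for the paper's setting, with all parameter values invented for illustration.

    ```python
    import numpy as np

    # With a true linear pre-trend, DiD estimates that survive the pre-trends
    # test are *more* biased than the unconditional estimate.
    rng = np.random.default_rng(0)
    trend, sigma, reps, effect = 1.0, 1.0, 100_000, 0.0

    # treated-minus-control outcome differences in periods t = -1, 0, 1
    diffs = trend * np.array([-1.0, 0.0, 1.0]) + rng.normal(0, sigma, (reps, 3))
    pre_hat = diffs[:, 1] - diffs[:, 0]            # estimated pre-trend
    did_hat = diffs[:, 2] - diffs[:, 1] + effect   # DiD estimate
    passed = np.abs(pre_hat) < 1.96 * np.sqrt(2) * sigma   # test "passes"

    print("unconditional bias:      ", did_hat.mean() - effect)
    print("bias after passing test: ", did_hat[passed].mean() - effect)
    ```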

    • [eess.IV]Looking at Hands in Autonomous Vehicles: A ConvNet Approach using Part Affinity Fields
    Kevan Yuen, Mohan M. Trivedi
    http://arxiv.org/abs/1804.01176v1

    In the context of autonomous driving, where humans may need to take over when the computer issues a takeover request, a key step towards driving safety is monitoring the hands to ensure the driver is ready for such a request. This work focuses on the first step of this process, which is to locate the hands. Such a system must work in real time and under varying, harsh lighting conditions. This paper introduces a fast ConvNet approach based on the original OpenPose work for full-body joint estimation. The network is modified with fewer parameters and retrained using our own daytime naturalistic autonomous driving dataset to estimate joint and affinity heatmaps for drivers' and passengers' wrists and elbows, for a total of 8 joint classes and part affinity fields between each wrist-elbow pair. The approach runs in real time on real-world data at 40 fps on multiple drivers and passengers. The system is extensively evaluated both quantitatively and qualitatively, showing at least 95% detection performance on joint localization and arm-angle estimation.

    • [eess.SP]Estimation of Channel Parameters in a Multipath Environment via Optimizing Highly Oscillatory Error-Functions Using a Genetic Algorithm
    Amir Ebrahimi, Ardavan Rahimian
    http://arxiv.org/abs/1804.01455v1

    Channel estimation is of crucial importance for tomorrow's wireless mobile communication systems. This paper focuses on solving the channel parameter estimation problem in a scenario involving multiple paths in the presence of additive white Gaussian noise. We assume that the number of paths in the multipath environment is known and that the transmitted signal consists of attenuated and delayed replicas of a known transient signal. Determining the maximum likelihood estimates requires solving a complicated optimization problem. Genetic algorithms (GAs) are well known for their robustness in solving complex optimization problems, so a GA is used to extract the channel parameters by minimizing the derived error function. The solution is based on maximum-likelihood estimation of the channel parameters. Simulation results also demonstrate the GA's robustness against channel parameter estimation errors.
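
    A minimal real-coded GA of the kind described is sketched below, here minimizing a Rastrigin-like oscillatory error function as a stand-in for the channel-parameter fitness (the test function and GA operators are illustrative choices, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def error(theta):                      # oscillatory, many local minima
        return np.sum(theta**2 - 5.0 * np.cos(4.0 * np.pi * theta), axis=-1)

    pop = rng.uniform(-2, 2, size=(100, 2))
    for gen in range(200):
        fit = error(pop)
        order = np.argsort(fit)
        parents = pop[order[:50]]                           # truncation selection
        mates = parents[rng.permutation(50)]
        alpha = rng.random((50, 1))
        children = alpha * parents + (1 - alpha) * mates    # blend crossover
        children += rng.normal(0, 0.05, children.shape)     # Gaussian mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmin(error(pop))]
    print("best parameters:", best, "error:", error(best))
    ```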

    • [math.CO]Pure gaps on curves with many rational places
    Daniele Bartoli, Ariane M. Masuda, Maria Montanucci, Luciane Quoos
    http://arxiv.org/abs/1804.01086v1

    We consider the algebraic curve defined by $y^m = f(x)$ where $m \geq 2$ and $f(x)$ is a rational function over $\mathbb{F}_q$. We extend the concept of pure gap to {\bf c}-gap and obtain a criterion to decide when an $s$-tuple is a {\bf c}-gap at $s$ rational places on the curve. As an application, we obtain many families of pure gaps at two rational places on curves with many rational places.

    • [math.OC]Sparse non-negative super-resolution - simplified and stabilised
    Armin Eftekhari, Jared Tanner, Andrew Thompson, Bogdan Toader, Hemant Tyagi
    http://arxiv.org/abs/1804.01490v1

    The convolution of a discrete measure, $x=\sum_{i=1}^k a_i\delta_{t_i}$, with a local window function, $\phi(s-t)$, is a common model for a measurement device whose resolution is substantially lower than that of the objects being observed. Super-resolution concerns localising the point sources $\{a_i,t_i\}_{i=1}^k$ with an accuracy beyond the essential support of $\phi(s-t)$, typically from $m$ samples $y(s_j)=\sum_{i=1}^k a_i\phi(s_j-t_i)+\eta_j$, where $\eta_j$ indicates an inexactness in the sample value. We consider the setting of $x$ being non-negative and seek to characterise all non-negative measures approximately consistent with the samples. We first show that $x$ is the unique non-negative measure consistent with the samples provided the samples are exact, i.e. $\eta_j=0$, $m\ge 2k+1$ samples are available, and $\phi(s-t)$ generates a Chebyshev system. This is independent of how close the sample locations are and {\em does not rely on any regulariser beyond non-negativity}; as such, it extends and clarifies the work by Schiebinger et al. and De Castro et al., who achieve the same results but require a total variation regulariser, which we show is unnecessary. Moreover, we characterise non-negative solutions $\hat{x}$ consistent with the samples within the bound $\sum_{j=1}^m\eta_j^2\le \delta^2$. Any such non-negative measure is within ${\mathcal O}(\delta^{1/7})$ of the discrete measure $x$ generating the samples in the generalised Wasserstein distance, converging to one another as $\delta$ approaches zero. We also show how to make these general results, for windows that form a Chebyshev system, precise for the case of $\phi(s-t)$ being a Gaussian window. The main innovation of these results is that non-negativity alone is sufficient to localise point sources beyond the essential sensor resolution.
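
    A small numerical demonstration of this theme: sample a non-negative spike train through a Gaussian window and recover it with plain non-negative least squares, with no TV regulariser (grid resolution, window width and spike values are arbitrary; the recovered support may spread over a few neighbouring grid points).

    ```python
    import numpy as np
    from scipy.optimize import nnls

    sigma = 0.05
    t_true, a_true = np.array([0.35, 0.62]), np.array([1.0, 0.7])
    s = np.linspace(0, 1, 9)                        # m = 9 >= 2k + 1 samples
    y = sum(a * np.exp(-(s - t) ** 2 / (2 * sigma**2))
            for a, t in zip(a_true, t_true))

    grid = np.linspace(0, 1, 401)                   # fine candidate grid
    Phi = np.exp(-(s[:, None] - grid[None, :]) ** 2 / (2 * sigma**2))
    x, _ = nnls(Phi, y)                             # non-negative least squares
    print("recovered locations:", grid[x > 1e-3])
    print("recovered amplitudes:", x[x > 1e-3])
    ```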

    • [math.PR]A Fixed Point Theorem for Iterative Random Contraction Operators over Banach Spaces
    Abhishek Gupta, Rahul Jain, Peter Glynn
    http://arxiv.org/abs/1804.01195v1

    Consider a contraction operator $T$ over a Banach space $\mathcal{X}$ with a fixed point $x^\star$. Assume that one can approximate the operator $T$ by a random operator $\hat T^N$ using $N\in\mathbb{N}$ independent and identically distributed samples of a random variable. Consider the sequence $(\hat X^N_k)_{k\in\mathbb{N}}$, which is generated by $\hat X^N_{k+1} = \hat T^N(\hat X^N_k)$ and is a random sequence. In this paper, we prove that under certain conditions on the random operator, (i) the distribution of $\hat X^N_k$ converges to a unit mass over $x^\star$ as $k$ and $N$ go to infinity, and (ii) the probability that $\hat X^N_k$ is far from $x^\star$ as $k$ goes to infinity can be made arbitrarily small by an appropriate choice of $N$. We also find a lower bound on the probability that $\hat X^N_k$ is far from $x^\star$ as $k\rightarrow \infty$. We apply the result to study probabilistic convergence of certain randomized optimization and value iteration algorithms.

    • [math.ST]Near-Optimal Recovery of Linear and N-Convex Functions on Unions of Convex Sets
    Anatoli Juditsky, Arkadi Nemirovski
    http://arxiv.org/abs/1804.00355v2

    In this paper, following the line of research on "statistical inference via convex optimization" (D. Donoho, Ann. Stat. 22(1) (1994), A. Juditsky, A. Nemirovski, Ann. Stat. 37(5a) (2009), A. Goldenshluger, A. Juditsky, A. Nemirovski, Electr. J. Stat. 9(2) (2015)), we build provably near-optimal, in the minimax sense, estimates of linear forms (or more general "N-convex" functionals, the simplest example being the maximum of several fractional-linear functions) of unknown "signal" known to belong to the union of finitely many convex compact sets from indirect noisy observations of the signal. Our main assumption is that the observation scheme in question is good in the sense of A. Goldenshluger, A. Juditsky, A. Nemirovski, Electr. J. Stat. 9(2) (2015), the simplest example being the Gaussian observation scheme, where the observation is the sum of linear image of the signal and the standard Gaussian noise. The proposed estimates, same as upper bounds on their worst-case risks, stem from solutions to explicit convex optimization problems, making the estimates "computation-friendly."

    • [math.ST]On inference validity of weighted U-statistics under data heterogeneity
    Fang Han, Tianchen Qian
    http://arxiv.org/abs/1804.00034v1

    Motivated by challenges in studying a new correlation measure being popularized for evaluating online ranking algorithms' performance, this manuscript explores the validity of uncertainty assessment for weighted U-statistics. Without any commonly adopted assumption, we verify the inference validity of Efron's bootstrap and of a new resampling procedure. Specifically, in its full generality, our theory allows both kernels and weights to be asymmetric and data points to be non-identically distributed, all of which are issues that historically have not been addressed. To achieve strict generalization, for example, we have to carefully control the order of the "degenerate" term in U-statistics, which is no longer degenerate under the empirical measure for non-i.i.d. data. Our result applies to the motivating task, giving the regime in which solid statistical inference can be made.

    • [math.ST]On the Existence of Uniformly Most Powerful Bayesian Tests With Application to Non-Central Chi-Squared Tests
    Amir Nikooienejad, Valen E. Johnson
    http://arxiv.org/abs/1804.01187v1

    Uniformly most powerful Bayesian tests (UMPBTs) are an objective class of Bayesian hypothesis tests that can be considered the Bayesian counterpart of classical uniformly most powerful tests. Unfortunately, UMPBTs have so far only been developed for one-parameter exponential family models. The purpose of this article is to describe methodology for deriving UMPBTs for a larger class of tests. Specifically, we introduce sufficient conditions for the existence of UMPBTs and propose a unified approach for their derivation. An important application of our methodology is the extension of UMPBTs to testing whether the non-centrality parameter of a chi-squared distribution is zero. The resulting tests have broad applicability, providing default alternative hypotheses to compute Bayes factors in, for example, Pearson's chi-squared test for goodness-of-fit, tests of independence in contingency tables, and likelihood ratio, score and Wald tests. We close with a brief comparison of our methodology to the Karlin-Rubin theorem.

    • [math.ST]The noise barrier and the large signal bias of the Lasso and other convex estimators
    Pierre C Bellec
    http://arxiv.org/abs/1804.01230v1

    Convex estimators such as the Lasso, the matrix Lasso and the group Lasso have been studied extensively in the last two decades, demonstrating great success in both theory and practice. Two quantities are introduced, the noise barrier and the large signal bias, that provide insights into the performance of these convex regularized estimators. It is now well understood that the Lasso achieves fast prediction rates, provided that the correlations of the design satisfy some Restricted Eigenvalue or Compatibility condition, and provided that the tuning parameter is large enough. Using the two quantities introduced in the paper, we show that the compatibility condition on the design matrix is actually unavoidable to achieve fast prediction rates with the Lasso: the Lasso must incur a loss due to the correlations of the design matrix, measured in terms of the compatibility constant. This result holds for any design matrix, any active subset of covariates, and any tuning parameter. It is now well known that the Lasso enjoys a dimension reduction property: the prediction error is of order $\lambda\sqrt k$ where $k$ is the sparsity, even if the ambient dimension $p$ is much larger than $k$. Such results require that the tuning parameter is greater than some universal threshold. We characterize sharp phase transitions for the tuning parameter of the Lasso around a critical threshold dependent on $k$. If $\lambda$ is equal to or larger than this critical threshold, the Lasso is minimax over $k$-sparse target vectors. If $\lambda$ is equal to or smaller than the critical threshold, the Lasso incurs a loss of order $\sigma\sqrt k$ --which corresponds to a model of size $k$-- even if the target vector has fewer than $k$ nonzero coefficients. Remarkably, the lower bounds obtained in the paper also apply to random, data-driven tuning parameters. The results extend to convex penalties beyond the Lasso.

    • [nlin.AO]Self-Organization and Artificial Life: A Review
    Carlos Gershenson, Vito Trianni, Justin Werfel, Hiroki Sayama
    http://arxiv.org/abs/1804.01144v1

    Self-organization has been an important concept within a number of disciplines, which Artificial Life (ALife) also has heavily utilized since its inception. The term and its implications, however, are often confusing or misinterpreted. In this work, we provide a mini-review of self-organization and its relationship with ALife, aiming at initiating discussions on this important topic with the interested audience. We first articulate some fundamental aspects of self-organization, outline its usage, and review its applications to ALife within its soft, hard, and wet domains. We also provide perspectives for further research.

    • [physics.soc-ph]Spatial diffusion and churn of social media
    Balázs Lengyel, Riccardo Di Clemente, János Kertész, Marta C. González
    http://arxiv.org/abs/1804.01349v1

    Innovative ideas, products or services spread on social networks that, in the digital age, are maintained to a large extent via telecommunication tools such as emails or social media. One of the intriguing puzzles in social contagion under such conditions is the role of physical space. It is also not understood how geography influences the disappearance of products at the end of their life-cycle. In this paper, we utilize a unique dataset compiled from a Hungarian on-line social network (OSN) to uncover novel features in the spatial adoption and churn of digital technologies. The studied OSN was established in 2002 and failed in international competition about a decade later. We find that early adopter towns churn early, while individuals tend to follow the churn of nearby friends and are less influenced by the churn of distant contacts. An agent-based Bass Diffusion Model describes how the product gets adopted in the overall population. We show the limitations of the model regarding the spatial aspects of diffusion and identify the directions of model corrections. Assortativity of adoption time, urban scaling of adoption over the product life-cycle and a distance decay function of diffusion probability are the main factors that spatial diffusion models need to account for.
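
    For reference, a minimal agent-based Bass diffusion sketch of the adoption phase is given below (churn is not modeled here, and the parameters are illustrative, not fitted to the OSN data).

    ```python
    import numpy as np

    # Each non-adopter adopts with a hazard combining innovation (p) and
    # imitation proportional to the adopted fraction (q).
    rng = np.random.default_rng(0)
    N, p, q, T = 10_000, 0.01, 0.35, 60
    adopted = np.zeros(N, dtype=bool)
    curve = []
    for t in range(T):
        hazard = p + q * adopted.mean()
        adopted |= (~adopted) & (rng.random(N) < hazard)
        curve.append(adopted.mean())
    print("adoption at t=10, 30, 60:", curve[9], curve[29], curve[-1])
    ```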

    • [q-bio.QM]An integration of fast alignment and maximum-likelihood methods for electron subtomogram averaging and classification
    Yixiu Zhao, Xiangrui Zeng, Qiang Guo, Min Xu
    http://arxiv.org/abs/1804.01203v1

    Motivation: Cellular Electron CryoTomography (CECT) is an emerging 3D imaging technique that visualizes the subcellular organization of single cells at submolecular resolution and in a near-native state. CECT captures large numbers of macromolecular complexes of highly diverse structures and abundances. However, the structural complexity and imaging limitations complicate the systematic de novo structural recovery and recognition of these macromolecular complexes. Efficient and accurate reference-free subtomogram averaging and classification represent the most critical tasks for such analysis. Existing subtomogram alignment based methods are prone to missing wedge effects and low signal-to-noise ratio (SNR). Moreover, existing maximum-likelihood based methods rely on integration operations, which are in principle computationally infeasible to calculate accurately. Results: Building on existing work, we propose an integrated method, the Fast Alignment Maximum Likelihood method (FAML), which uses fast subtomogram alignment to sample sub-optimal rigid transformations. The transformations are then used to approximate integrals for maximum-likelihood updates of subtomogram averages through an expectation-maximization algorithm. Our tests on simulated and experimental subtomograms showed that, compared to our previously developed fast alignment method (FA), FAML is significantly more robust to noise and missing wedge effects, with moderate increases in computation cost. Besides, FAML performs well with significantly fewer input subtomograms when the FA method fails. Therefore, FAML can serve as a key component for improved construction of initial structural models from macromolecules captured by CECT.

    • [stat.AP]A Mixture Model to Detect Edges in Sparse Co-expression Graphs
    Haim Bar, Seojin Bang
    http://arxiv.org/abs/1804.01185v1

    In the early days of microarray data, the medical and statistical communities focused on gene-level data, and particularly on finding differentially expressed genes. This usually involved making a simplifying assumption that genes are independent, which made likelihood derivations feasible and allowed for relatively simple implementations. However, this is not a realistic assumption, and in recent years the scope has expanded to include pathway and 'gene set' analysis in an attempt to understand the relationships between genes. In this paper we develop a method to recover a gene network's structure from co-expression data, which we measure in terms of normalized Pearson's correlation coefficients between gene pairs. We treat these co-expression measurements as weights in the complete graph in which nodes correspond to genes. We assume that the network is sparse and that only a small fraction of the putative edges are included ('non-null' edges). To decide which edges exist in the gene network, we fit a three-component mixture model such that the observed weights of 'null' edges follow a normal distribution with mean 0, and the weights of non-null edges follow a mixture of two log-normal distributions, one for positively- and one for negatively-correlated pairs. We show that this so-called $L_2N$ mixture model outperforms other methods in terms of power to detect edges. We also show that using the $L_2N$ model allows for the control of the false discovery rate. Importantly, the method makes no assumptions about the true network structure.
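
    The generative side of the $L_2N$ model is easy to simulate; the sketch below draws edge weights from the three components and classifies edges by posterior probability under known parameters (fitting, e.g. by EM, is left aside, and all parameter values are invented for illustration).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_edges, pi = 5000, np.array([0.9, 0.06, 0.04])      # null, positive, negative
    z = rng.choice(3, size=n_edges, p=pi)
    w = np.where(z == 0, rng.normal(0, 0.3, n_edges),
        np.where(z == 1, rng.lognormal(0.0, 0.4, n_edges),
                        -rng.lognormal(0.0, 0.4, n_edges)))

    # Component densities: N(0, 0.3) for null edges; log-normals on |w| for
    # positively and negatively correlated pairs.
    dens = np.vstack([
        stats.norm.pdf(w, 0, 0.3),
        np.where(w > 0, stats.lognorm.pdf(np.abs(w), 0.4), 0.0),
        np.where(w < 0, stats.lognorm.pdf(np.abs(w), 0.4), 0.0),
    ])
    post = pi[:, None] * dens
    post /= post.sum(axis=0)                             # posterior over components
    calls = post.argmax(axis=0)
    print(np.array([[np.sum((z == i) & (calls == j)) for j in range(3)]
                    for i in range(3)]))                 # confusion: true vs called
    ```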

    • [stat.AP]A latent variable model to measure exposure diversification in the Austrian interbank market
    Juraj Hledik, Riccardo Rastelli
    http://arxiv.org/abs/1804.01367v1

    We propose a statistical model for weighted temporal networks capable of measuring the level of heterogeneity in a financial system. Our model focuses on the level of diversification of financial institutions; that is, whether they are more inclined to distribute their assets equally among partners, or if they rather concentrate their commitment towards a limited number of institutions. Crucially, a Markov property is introduced to capture time dependencies and to make our measures comparable across time. We apply the model on an original dataset of Austrian interbank exposures. The temporal span encompasses the onset and development of the financial crisis in 2008 as well as the beginnings of European sovereign debt crisis in 2011. Our analysis highlights an overall increasing trend for network homogeneity, whereby core banks have a tendency to distribute their market exposures more equally across their partners.

    • [stat.AP]On approximate least squares estimators of parameters on one-dimensional chirp signal
    Rhythm Grover, Debasis Kundu, Amit Mitra
    http://arxiv.org/abs/1804.01269v1

    Chirp signals are quite common in many natural and man-made systems such as audio signals, sonar and radar. Estimation of the unknown parameters of a signal is a fundamental problem in statistical signal processing. Recently, Kundu and Nandi (2008) studied the asymptotic properties of least squares estimators of the unknown parameters of a simple chirp signal model under the assumption of stationary noise. In this paper, we propose periodogram-type estimators, called approximate least squares estimators, to estimate the unknown parameters and study the asymptotic properties of these estimators under the same error assumptions. It is observed that the approximate least squares estimators are strongly consistent and asymptotically equivalent to the least squares estimators. Similar to the periodogram estimators, these estimators can also be used as initial guesses to find the least squares estimators of the unknown parameters. We perform numerical simulations to assess the performance of the proposed estimators and compare them with the least squares estimators and the estimators proposed by Lahiri et al. (2013). We also analyse two real data sets for illustrative purposes.
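
    For orientation, the single-chirp model and the periodogram-type criterion that approximate least squares estimators maximize are commonly written as follows (this is the standard formulation in the chirp literature; the paper's notation may differ in details):

    $y(t) = A\cos(\alpha t + \beta t^2) + B\sin(\alpha t + \beta t^2) + X(t), \quad t = 1,\ldots,n,$

    $(\hat\alpha, \hat\beta) = \arg\max_{(\alpha,\beta)} \frac{1}{n}\left|\sum_{t=1}^{n} y(t)\, e^{-i(\alpha t + \beta t^2)}\right|^2,$

    where $X(t)$ is stationary noise and the linear parameters $A, B$ then follow by least squares given $(\hat\alpha, \hat\beta)$.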

    • [stat.AP]Predictive and Prescriptive Analytics for Location Selection of Add-on Retail Products
    Teng Huang, David Bergman, Ram Gopal
    http://arxiv.org/abs/1804.01182v1

    In this paper, we study an analytical approach to selecting expansion locations for retailers selling add-on products whose demand is derived from the demand of another base product. Demand for the add-on product is realized only as a supplement to the demand of the base product. In our context, either of the two products could be subject to spatial autocorrelation where demand at a given location is impacted by demand at other locations. Using data from an industrial partner selling add-on products, we build predictive models for understanding the derived demand of the add-on product and establish an optimization framework for automating expansion decisions to maximize expected sales. Interestingly, spatial autocorrelation and the complexity of the predictive model impact the complexity and the structure of the prescriptive optimization model. Our results indicate that the models formulated are highly effective in predicting add-on product sales, and that using the optimization framework built on the predictive model can result in substantial increases in expected sales over baseline policies.

    • [stat.ME]Beta regression control chart for monitoring fractions and proportions
    Fábio Mariano Bayer, Catia Michele Tondolo, Fernanda Maria Müller
    http://arxiv.org/abs/1804.01454v1

    Regression control charts are usually used to monitor variables of interest that are related to control variables. However, for fraction and/or proportion data, the use of standard regression control charts may not be adequate, since the linear regression model assumes the normality of the interest variable. To work around this problem, we propose the beta regression control chart (BRCC). The BRCC is useful for monitoring fraction, rate and/or proportion data sets when they are related to control variables. The proposed control chart assumes that the mean and dispersion parameters of beta distributed variables are related to the exogenous variables, being modeled using regression structures. The BRCC is numerically assessed through an extensive Monte Carlo simulation study, showing good performance in terms of average run length (ARL). Two applications to real data are presented, evidencing the practical applicability of the proposed method.

    • [stat.ME]Model assessment for time series dynamics using copula spectral densities: a graphical tool
    Stefan Birr, Tobias Kley, Stanislav Volgushev
    http://arxiv.org/abs/1804.01440v1

    Finding parametric models that accurately describe the dependence structure of observed data is a central task in the analysis of time series. Classical frequency domain methods provide a popular set of tools for fitting and diagnostics of time series models, but their applicability is seriously impacted by the limitations of covariances as a measure of dependence. Motivated by recent developments of frequency domain methods that are based on copulas instead of covariances, we propose a novel graphical tool that allows one to assess the quality of time series models for describing dependencies that go beyond linearity. We provide a thorough theoretical justification of our approach and show in simulations that it can successfully distinguish between subtle differences of time series dynamics, including non-linear dynamics which result from GARCH and EGARCH models.

    • [stat.ME]Robust Discrimination between Long-Range Dependence and a Change in Mean
    Carina Gerstenberger
    http://arxiv.org/abs/1804.01268v1

    In this paper we introduce an outlier-robust Wilcoxon change-point testing procedure for distinguishing between short-range dependent time series with a change in mean at an unknown time and stationary long-range dependent time series. We establish the asymptotic distribution of the test statistic under the null hypothesis for $L_1$ near epoch dependent processes and show its consistency under the alternative. The Wilcoxon-type testing procedure, similarly to the CUSUM-type testing procedure of Berkes, Horváth, Kokoszka and Shao (2006), requires estimation of the location of a possible change-point, and then uses pre- and post-break subsamples to discriminate between short and long-range dependence. A simulation study examines the empirical size and power of the Wilcoxon-type testing procedure in standard cases and with disturbances by outliers. It shows that in standard cases the Wilcoxon-type testing procedure behaves as well as the CUSUM-type testing procedure but outperforms it in the presence of outliers.

    • [stat.ME]Shape-Constrained Univariate Density Estimation
    Sutanoy Dasgupta, Debdeep Pati, Ian H. Jermyn, Anuj Srivastava
    http://arxiv.org/abs/1804.01458v1

    While the problem of estimating a probability density function (pdf) from observations is classical, estimation under additional shape constraints is both important and challenging. We introduce an efficient, geometric approach for estimating a pdf given its number of modes. The approach explores the space of constrained pdfs using an action of the diffeomorphism group that preserves their shapes. It starts with an initial template having the desired number of modes and arbitrarily chosen heights at the critical points, and transforms it via (1) composition with diffeomorphisms and (2) normalization to obtain the final density estimate. The search for the optimal diffeomorphism is performed under the maximum-likelihood criterion and is accomplished by mapping diffeomorphisms to the tangent space of a Hilbert sphere, a vector space whose elements can be expressed in an orthogonal basis. The framework is first applied to shape-constrained univariate, unconditional pdf estimation and then extended to conditional pdf estimation. We derive asymptotic convergence rates of the estimator and demonstrate the approach on a synthetic dataset of speed distributions for different traffic flows on Californian driveways.
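
    A toy version of the warping idea is sketched below: a bimodal template on [0,1] is deformed by a parameterized diffeomorphism (the normalized integral of a positive function) and the warp coefficients are fit by maximum likelihood. This is a loose illustration only; the sine basis, beta-mixture template, and Nelder-Mead optimizer are my choices, not the paper's exact group action or Hilbert-sphere construction.

```python
# Toy illustration (hedged): maximum-likelihood warping of a bimodal template.
import numpy as np
from scipy import optimize, stats

GRID = np.linspace(0.0, 1.0, 2001)

def template_pdf(x):
    # Bimodal template: an arbitrary two-mode mixture of beta densities.
    return 0.5 * stats.beta.pdf(x, 8, 24) + 0.5 * stats.beta.pdf(x, 24, 8)

def gamma_warp(coefs):
    # Diffeomorphism of [0,1]: normalized integral of exp(sine expansion) > 0.
    s = sum(c * np.sin(np.pi * (j + 1) * GRID) for j, c in enumerate(coefs))
    dens = np.exp(s)
    dx = GRID[1] - GRID[0]
    cdf = np.concatenate([[0.0], np.cumsum((dens[1:] + dens[:-1]) / 2) * dx])
    return cdf / cdf[-1]

def negloglik(coefs, data):
    g = gamma_warp(coefs)
    f_grid = template_pdf(g) * np.gradient(g, GRID)  # warped density on the grid
    f_at = np.interp(data, GRID, f_grid)
    return -np.log(np.maximum(f_at, 1e-12)).sum()

rng = np.random.default_rng(4)
data = np.concatenate([rng.beta(12, 6, 250), rng.beta(3, 10, 250)])  # bimodal sample
res = optimize.minimize(negloglik, np.zeros(4), args=(data,), method="Nelder-Mead")
print("fitted warp coefficients:", np.round(res.x, 3))
```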

    • [stat.ME]Variable selection using pseudo-variables
    Wenhao Hu, Eric Laber, Leonard Stefanski
    http://arxiv.org/abs/1804.01201v1

    Penalized regression has become a standard tool for model building across a wide range of application domains. Common practice is to tune the amount of penalization to trade off bias and variance or to optimize some other measure of performance of the estimated model. An advantage of such automated model-building procedures is that their operating characteristics are well-defined, i.e., completely data-driven, so they can be studied systematically. In many applications, however, it is desirable to incorporate domain knowledge into the model-building process. One way to do this is to characterize each model along the solution path of a penalized regression estimator in terms of an operating characteristic that is meaningful within the domain context, and then to let domain experts choose among these models using those operating characteristics as well as other factors not available to the estimation algorithm. We derive an estimator of the false selection rate for each model along the solution path using a novel variable-addition method. The proposed estimator applies to both fixed and random designs and allows for $p \gg n$. It can be used to estimate a model with a pre-specified false selection rate, or it can be overlaid on the solution path to facilitate interactive model exploration. We characterize the asymptotic behavior of the proposed estimator for linear models under a fixed design; simulation experiments show that it provides consistently more accurate estimates of the false selection rate than competing methods across a wide range of models.
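
    A crude stand-in for the pseudo-variable idea (not the paper's exact estimator) is sketched below: append permuted "null" copies of the predictors, trace the lasso path, and estimate the false selection rate at each penalty from how many pseudo-variables get selected.

```python
# Rough sketch: pseudo-variables as known nulls along the lasso path.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(5)
n, p, k_true = 100, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k_true] = 2.0
y = X @ beta + rng.standard_normal(n)

X_pseudo = rng.permuted(X, axis=0)       # column-wise permuted copies: nulls
X_aug = np.hstack([X, X_pseudo])
alphas, coefs, _ = lasso_path(X_aug, y)  # coefs has shape (2p, n_alphas)

for a, c in zip(alphas[::10], coefs[:, ::10].T):
    real_sel = np.count_nonzero(c[:p])
    pseudo_sel = np.count_nonzero(c[p:])
    fsr_hat = pseudo_sel / max(real_sel + pseudo_sel, 1)  # naive FSR estimate
    print(f"alpha={a:.3f}: selected={real_sel + pseudo_sel}, est. FSR={fsr_hat:.2f}")
```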

    • [stat.ML]An Imprecise Probabilistic Estimator for the Transition Rate Matrix of a Continuous-Time Markov Chain
    Thomas Krak, Alexander Erreygers, Jasper De Bock
    http://arxiv.org/abs/1804.01330v1

    We consider the problem of estimating the transition rate matrix of a continuous-time Markov chain from a finite-duration realisation of this process. We approach this problem in an imprecise probabilistic framework, using a set of prior distributions on the unknown transition rate matrix. The resulting estimator is a set of transition rate matrices that, for reasons of conjugacy, is easy to find. To determine the hyperparameters for our set of priors, we reconsider the problem in discrete time, where we can use the well-known Imprecise Dirichlet Model. In particular, we show how the limit of the resulting discrete-time estimators is a continuous-time estimator. It corresponds to a specific choice of hyperparameters and has an exceptionally simple closed-form expression.
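
    A hedged sketch of the setting: from a single observed path, tally transition counts n(x, y) and holding times d(x), giving the empirical rate n(x, y)/d(x); an imprecise band is then obtained by perturbing the counts with up to s_max prior pseudo-counts. The pseudo-count band below illustrates the flavor of the result only and is not the paper's closed-form set estimator.

```python
# Illustrative sketch (not the paper's estimator): count-based CTMC rate
# estimates with a pseudo-count band standing in for the imprecise prior set.
import numpy as np

def rate_matrix_bounds(states, times, n_states, s_max=2.0):
    counts = np.zeros((n_states, n_states))
    hold = np.zeros(n_states)
    for a, b, dt in zip(states[:-1], states[1:], np.diff(times)):
        hold[a] += dt       # time spent in state a before jumping to b
        counts[a, b] += 1
    low = counts / np.maximum(hold, 1e-12)[:, None]             # s = 0
    high = (counts + s_max) / np.maximum(hold, 1e-12)[:, None]  # s = s_max
    for q in (low, high):
        np.fill_diagonal(q, 0.0)
        np.fill_diagonal(q, -q.sum(axis=1))
    return low, high

# Simulate a two-state chain with rates 1.0 (0 -> 1) and 0.5 (1 -> 0).
rng = np.random.default_rng(6)
rates, t, s, times, states = (1.0, 0.5), 0.0, 0, [0.0], [0]
while t < 200.0:
    t += rng.exponential(1.0 / rates[s])
    s = 1 - s
    times.append(t)
    states.append(s)
low, high = rate_matrix_bounds(states, times, 2)
print("lower rate matrix:\n", low.round(3), "\nupper rate matrix:\n", high.round(3))
```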

    • [stat.ML]Gaussian Process Subset Scanning for Anomalous Pattern Detection in Non-iid Data
    William Herlands, Edward McFowland III, Andrew Gordon Wilson, Daniel B. Neill
    http://arxiv.org/abs/1804.01466v1

    Identifying anomalous patterns in real-world data is essential for understanding where, when, and how systems deviate from their expected dynamics. Yet methods that consider the anomalousness of each data point separately have low detection power for subtle, emerging irregularities. Moreover, recent detection techniques based on subset scanning make strong independence assumptions and suffer degraded performance on correlated data. We introduce methods for identifying anomalous patterns in non-iid data by combining Gaussian processes with novel log-likelihood ratio statistics and subset scanning techniques. Our approaches are powerful, interpretable, and can integrate information across multiple data streams. We illustrate their performance on numerical simulations and on three open-source spatiotemporal datasets of opioid overdose deaths, 311 calls, and storm reports.
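
    A toy combination of the two ingredients (my version, hedged): fit a GP to the series, then scan subsets of standardized residuals with the Gaussian mean-shift log-likelihood ratio (sum of residuals)^2 / (2|S|); for a positive shift its maximizer is always some top-k set of sorted residuals, so only n subsets need checking.

```python
# Toy sketch: GP residuals + a top-k subset scan for a mean-shift anomaly.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 200)[:, None]
y = np.sin(x).ravel() + 0.2 * rng.standard_normal(200)
y[80:95] += 1.0                              # injected anomalous window

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.04), normalize_y=True)
gp.fit(x, y)
mu, sd = gp.predict(x, return_std=True)
r = (y - mu) / np.maximum(sd, 1e-9)          # standardized residuals

order = np.argsort(r)[::-1]                  # positive shifts: scan top-k sets
best_k, best_score = 1, -np.inf
for k in range(1, len(r) + 1):
    s = r[order[:k]].sum()
    score = s * s / (2 * k)                  # Gaussian mean-shift LLR
    if score > best_score:
        best_k, best_score = k, score
subset = np.sort(order[:best_k])
print(f"detected subset spans indices {subset[0]}..{subset[-1]}, score={best_score:.1f}")
```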
