2018-07-24

Author: hzyido | Published 2018-07-24 14:46 | Read 66 times

    Paper notes on Learning Convolutional Neural Networks for Graphs - CSDN Blog
    https://blog.csdn.net/bvl10101111/article/details/53484620

    智能立方 (WeChat official account)
    https://mp.weixin.qq.com/s/a8xW33fff7oQGOMNJc99GA

    [1807.08146] Accurate Energy-Efficient Power Control for Uplink NOMA Systems under Delay Constraint
    https://arxiv.org/abs/1807.08146

    [1807.08108] Simultaneous Adversarial Training - Learn from Others' Mistakes
    https://arxiv.org/abs/1807.08108

    [1807.07984] Attention Models in Graphs: A Survey
    https://arxiv.org/abs/1807.07984
    Graph-structured data arise naturally in many different application domains. By representing data as graphs, we can capture entities (i.e., nodes) as well as their relationships (i.e., edges) with each other. Many useful insights can be derived from graph-structured data, as demonstrated by an ever-growing body of work focused on graph mining. However, in the real world, graphs can be both large, with many complex patterns, and noisy, which can pose a problem for effective graph mining. An effective way to deal with this issue is to incorporate "attention" into graph mining solutions. An attention mechanism allows a method to focus on task-relevant parts of the graph, helping it to make better decisions. In this work, we conduct a comprehensive and focused survey of the literature on the emerging field of graph attention models. We introduce three intuitive taxonomies to group existing work. These are based on problem setting (type of input and output), the type of attention mechanism used, and the task (e.g., graph classification, link prediction, etc.). We motivate our taxonomies through detailed examples and use each to survey competing approaches from a unique standpoint. Finally, we highlight several challenges in the area and discuss promising directions for future work.
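
    The survey above motivates attention as a way for a node to weight its neighbors by task relevance. As a rough illustration only (not an algorithm taken from the survey), the following Python sketch computes GAT-style attention coefficients over each node's neighborhood; the projection W and scoring vector a are assumptions made for the example and would normally be learned:

    import numpy as np

    def attention_aggregate(h, adj, W, a):
        """h: (N, F) node features; adj: (N, N) 0/1 adjacency;
        W: (F, F2) projection; a: (2*F2,) attention scoring vector."""
        z = h @ W                                    # project node features
        out = np.zeros_like(z)
        for i in range(h.shape[0]):
            nbrs = np.where(adj[i] > 0)[0]
            # unnormalized scores e_ij = LeakyReLU(a^T [z_i || z_j])
            e = np.array([np.concatenate([z[i], z[j]]) @ a for j in nbrs])
            e = np.where(e > 0, e, 0.2 * e)          # LeakyReLU
            alpha = np.exp(e - e.max())
            alpha /= alpha.sum()                     # softmax over the neighborhood
            out[i] = (alpha[:, None] * z[nbrs]).sum(axis=0)
        return out

    # toy usage: a 3-node path graph with random features
    rng = np.random.default_rng(0)
    h = rng.normal(size=(3, 4))
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
    out = attention_aggregate(h, adj, W=rng.normal(size=(4, 4)), a=rng.normal(size=(8,)))
    print(out.shape)  # (3, 4)

    The softmax weights alpha are the "focus on task-relevant parts of the graph" the abstract refers to: neighbors with higher scores contribute more to the aggregated representation.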

    [1807.08372] Knowledge-based Transfer Learning Explanation
    https://arxiv.org/abs/1807.08372

    [1807.08596] Recent Advances in Convolutional Neural Network Acceleration
    https://arxiv.org/abs/1807.08596

    [1807.08725] Scalable Tensor Completion with Nonconvex Regularization
    https://arxiv.org/abs/1807.08725

    [1807.08058] Learning Heuristics for Automated Reasoning through Deep Reinforcement Learning
    https://arxiv.org/abs/1807.08058

    [1807.08237] Learning Deep Hidden Nonlinear Dynamics from Aggregate Data
    https://arxiv.org/abs/1807.08237

    [1807.07963] Deep Transfer Learning for Cross-domain Activity Recognition
    https://arxiv.org/abs/1807.07963

    [1807.07987] Deep Learning
    https://arxiv.org/abs/1807.07987

    [1807.08582] Person Search by Multi-Scale Matching
    https://arxiv.org/abs/1807.08582

    [1807.08526] Improving Deep Models of Person Re-identification for Cross-Dataset Usage
    https://arxiv.org/abs/1807.08526

    [1807.08479] Domain Generalization via Conditional Invariant Representation
    https://arxiv.org/abs/1807.08479

    [1807.08291] Correlation Net: spatio temporal multimodal deep learning
    https://arxiv.org/abs/1807.08291

    [1807.08446] Minimizing Sum of Non-Convex but Piecewise log-Lipschitz Functions using Coresets
    https://arxiv.org/abs/1807.08446

    [1807.08409] Subsampling MCMC - A review for the survey statistician
    https://arxiv.org/abs/1807.08409

    [1807.08207] Predicting purchasing intent: Automatic Feature Learning using Recurrent Neural Networks
    https://arxiv.org/abs/1807.08207

    [1807.08169] Recent Advances in Deep Learning: An Overview
    https://arxiv.org/abs/1807.08169

    [1807.07868] The Deep Kernelized Autoencoder
    https://arxiv.org/abs/1807.07868

    [1807.07645] Distributed approximation algorithms for maximum matching in graphs and hypergraphs
    https://arxiv.org/abs/1807.07645

    [1807.07640] Coloring in Graph Streams
    https://arxiv.org/abs/1807.07640

    [1807.07619] Generalized Metric Repair on Graphs
    https://arxiv.org/abs/1807.07619

    [1807.07612] Adaptive Variational Particle Filtering in Non-stationary Environments
    https://arxiv.org/abs/1807.07612

    [1807.07789] Escaping the Curse of Dimensionality in Similarity Learning: Efficient Frank-Wolfe Algorithm and Generalization Bounds
    https://arxiv.org/abs/1807.07789

    [1807.07627] Rapid Time Series Prediction with a Hardware-Based Reservoir Computer
    https://arxiv.org/abs/1807.07627

    [1807.08046] A Fast, Principled Working Set Algorithm for Exploiting Piecewise Linear Structure in Convex Problems
    https://arxiv.org/abs/1807.08046

    By reducing optimization to a sequence of smaller subproblems, working set algorithms achieve fast convergence times for many machine learning problems. Despite such performance, working set implementations often resort to heuristics to determine subproblem size, makeup, and stopping criteria. We propose BlitzWS, a working set algorithm with useful theoretical guarantees. Our theory relates subproblem size and stopping criteria to the amount of progress during each iteration. This result motivates strategies for optimizing algorithmic parameters and discarding irrelevant components as BlitzWS progresses toward a solution. BlitzWS applies to many convex problems, including training L1-regularized models and support vector machines. We showcase this versatility with empirical comparisons, which demonstrate BlitzWS is indeed a fast algorithm.
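
    As context for the working-set idea described above (and not the BlitzWS algorithm itself, whose subproblem sizing and stopping rules are precisely what the paper makes principled), here is a minimal Python sketch of the generic pattern for an L1-regularized least-squares problem: pick a small set of promising coordinates, solve the restricted subproblem, and repeat until no excluded coordinate violates optimality. The heuristics (ws_size, iteration counts, tolerances) are illustrative assumptions:

    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def lasso_working_set(X, y, lam, ws_size=10, outer_iters=20, inner_iters=50):
        """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 with a simple working-set loop."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(outer_iters):
            # 1. score excluded coordinates by how badly they violate the KKT condition
            grad = X.T @ (X @ w - y) / n
            viol = np.abs(grad) - lam
            viol[w != 0] = np.inf                     # always keep active coordinates
            ws = np.argsort(-viol)[:ws_size]
            # 2. solve the restricted subproblem by coordinate descent on ws
            col_sq = (X[:, ws] ** 2).sum(axis=0) / n
            for _ in range(inner_iters):
                for k, j in enumerate(ws):
                    r = y - X @ w + X[:, j] * w[j]    # residual with feature j removed
                    w[j] = soft_threshold(X[:, j] @ r / n, lam) / max(col_sq[k], 1e-12)
            # 3. stop once no zero coordinate violates optimality
            grad = X.T @ (X @ w - y) / n
            if np.all(np.abs(grad[w == 0]) <= lam + 1e-6):
                break
        return w

    # toy usage: sparse ground truth recovered from 100 noisy samples
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 50))
    w_true = np.zeros(50); w_true[:3] = [2.0, -1.5, 1.0]
    y = X @ w_true + 0.01 * rng.normal(size=100)
    print(np.nonzero(lasso_working_set(X, y, lam=0.1))[0])  # indices of nonzero coefficients

    Each outer pass only touches a few columns of X, which is where working-set methods get their speed; BlitzWS replaces the fixed ws_size and the ad-hoc stopping rule with choices backed by its per-iteration progress guarantees.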

    [1807.08140] On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks
    https://arxiv.org/abs/1807.08140

    Theoretical analysis of the error landscape of deep neural networks has garnered significant interest in recent years. In this work, we theoretically study the importance of noise in the trajectories of gradient descent towards optimal solutions in multi-layer neural networks. We show that adding noise (in different ways) to a neural network while training increases the rank of the product of weight matrices of a multi-layer linear neural network. We thus study how adding noise can assist in reaching a global optimum when the product matrix is full-rank (under certain conditions). We establish theoretical connections between the noise injected into the neural network (to the gradient, to the architecture, or to the input/output of the network) and the rank of the product of weight matrices. We corroborate our theoretical findings with empirical results.
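
    A tiny numerical illustration (not the paper's experiments) of the quantity analyzed above: the rank of the product of weight matrices of a deep linear network, and how perturbing the weights with noise, as noisy training updates would, takes a rank-deficient product to full rank almost surely. The layer sizes and noise scale are assumptions made for the example:

    import numpy as np

    rng = np.random.default_rng(0)

    # two-layer linear net f(x) = W2 @ W1 @ x with a rank-1 bottleneck in W1
    W1 = np.outer(rng.normal(size=4), rng.normal(size=6))   # shape (4, 6), rank 1
    W2 = rng.normal(size=(5, 4))                            # shape (5, 4), rank 4
    print(np.linalg.matrix_rank(W2 @ W1))                   # 1: the product is rank-deficient

    # add small noise to the weights, e.g. as noisy gradient updates would
    W1n = W1 + 1e-3 * rng.normal(size=W1.shape)
    W2n = W2 + 1e-3 * rng.normal(size=W2.shape)
    print(np.linalg.matrix_rank(W2n @ W1n))                 # 4: full rank, i.e. min(5, 4, 6), almost surely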

    [1807.07801] Finding Structure in Dynamic Networks
    https://arxiv.org/abs/1807.07801

    This document is the first part of the author's habilitation thesis (HDR), defended on June 4, 2018 at the University of Bordeaux. Given the nature of this document, the contributions that involve the author have been emphasized; however, these four chapters were specifically written for distribution to a larger audience. We hope they can serve as a broad introduction to the domain of highly dynamic networks, with a focus on temporal graph concepts and their interaction with distributed computing.
