Keywords

Matrix factorization, Artificial neural network, Computer science, Factorization, Matrix (chemical analysis), Artificial intelligence, Algorithm, Physics, Chemistry, Eigenvector, Quantum mechanics, Chromatography
Authors

Gintare Karolina Dziugaite, Daniel M. Roy
                    
Source

Journal: Cornell University - arXiv
Date: 2015-01-01
Citations: 155
        
    
            
Identifier

DOI: 10.48550/arxiv.1511.06443
        
                
Abstract

Data often comes in the form of an array or matrix. Matrix factorization techniques attempt to recover missing or corrupted entries by assuming that the matrix can be written as the product of two low-rank matrices. In other words, matrix factorization approximates the entries of the matrix by a simple, fixed function---namely, the inner product---acting on the latent feature vectors for the corresponding row and column. Here we consider replacing the inner product by an arbitrary function that we learn from the data at the same time as we learn the latent feature vectors. In particular, we replace the inner product by a multi-layer feed-forward neural network, and learn by alternating between optimizing the network for fixed latent features, and optimizing the latent features for a fixed network. The resulting approach---which we call neural network matrix factorization or NNMF, for short---dominates standard low-rank techniques on a suite of benchmarks, but is dominated by some recent proposals that take advantage of graph features of the data. Given the vast range of architectures, activation functions, regularizers, and optimization techniques that could be used within the NNMF framework, it seems likely the true potential of the approach has yet to be reached.
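The training loop sketched in the abstract (predict each observed entry with a feed-forward network applied to the concatenated row and column latent vectors, alternating between updating the network for fixed latent features and updating the latent features for a fixed network) can be illustrated roughly as follows. This is a minimal sketch, not the paper's exact setup: the matrix sizes, the single hidden layer, the ReLU activation, and the plain gradient steps are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observed matrix with a nonlinear row-column interaction (illustrative data).
n_rows, n_cols, d, h = 8, 6, 3, 16
U_true = rng.normal(size=(n_rows, d))
V_true = rng.normal(size=(n_cols, d))
R = np.tanh(U_true @ V_true.T)               # ground-truth entries
mask = rng.random((n_rows, n_cols)) < 0.7    # which entries are observed

# Latent feature vectors and a one-hidden-layer network (assumed sizes).
U = 0.1 * rng.normal(size=(n_rows, d))
V = 0.1 * rng.normal(size=(n_cols, d))
W1 = 0.5 * rng.normal(size=(2 * d, h))
b1 = np.zeros(h)
w2 = 0.5 * rng.normal(size=h)
b2 = 0.0

rows, cols = np.nonzero(mask)
y = R[rows, cols]

def forward(U, V):
    # Network input: concatenated row and column latent vectors.
    x = np.concatenate([U[rows], V[cols]], axis=1)   # (m, 2d)
    z = x @ W1 + b1
    a = np.maximum(z, 0.0)                           # ReLU
    return x, z, a, a @ w2 + b2                      # prediction per entry

def mse(pred):
    return np.mean((pred - y) ** 2)

initial_mse = mse(forward(U, V)[3])

lr = 0.05
for epoch in range(300):
    # Step 1: gradient step on the network weights, latent features fixed.
    x, z, a, pred = forward(U, V)
    g = 2.0 * (pred - y) / len(y)                    # dL/dpred
    gw2 = a.T @ g
    gb2 = g.sum()
    ga = np.outer(g, w2) * (z > 0)                   # back through ReLU
    gW1 = x.T @ ga
    gb1 = ga.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2

    # Step 2: gradient step on the latent features, network fixed.
    x, z, a, pred = forward(U, V)
    g = 2.0 * (pred - y) / len(y)
    ga = np.outer(g, w2) * (z > 0)
    gx = ga @ W1.T                                   # gradient w.r.t. network input
    gU = np.zeros_like(U); gV = np.zeros_like(V)
    np.add.at(gU, rows, gx[:, :d])                   # accumulate over repeated rows
    np.add.at(gV, cols, gx[:, d:])
    U -= lr * gU; V -= lr * gV

final_mse = mse(forward(U, V)[3])
print(f"training MSE: {initial_mse:.4f} -> {final_mse:.4f}")
```

Because the prediction function is a learned network rather than a fixed inner product, both phases reduce the same squared-error objective on observed entries, so the alternating scheme is a block coordinate descent; the error on the observed entries should fall steadily from its initial value.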
         
            
 
                 
                
                    