Learning general sparse additive models from point queries in high dimensions
Document type :
Journal article
Permalink :
Title :
Learning general sparse additive models from point queries in high dimensions
Author(s) :
Journal title :
Constructive Approximation
Publisher :
Springer Verlag
Publication date :
2019-01-28
ISSN :
0176-4276
Keyword(s) :
Sparse additive models
Sampling
Hash functions
Sparse recovery
Mathematics
HAL domain(s) :
Mathematics [math]
English abstract : [en]
We consider the problem of learning a $d$-variate function $f$ defined on the cube $[-1,1]^d \subset \mathbb{R}^d$, where the algorithm is assumed to have black-box access to samples of $f$ within this domain. Denote by $S_r \subset \binom{[d]}{r}$, $r = 1, \ldots, r_0$, the sets consisting of the unknown $r$-wise interactions amongst the coordinate variables. We then focus on the setting where $f$ has an additive structure, i.e., it can be represented as $f = \sum_{j \in S_1} \phi_j + \sum_{j \in S_2} \phi_j + \cdots + \sum_{j \in S_{r_0}} \phi_j$, where each $\phi_j$, $j \in S_r$, is at most $r$-variate for $1 \leq r \leq r_0$. We derive randomized algorithms that query $f$ at a carefully constructed set of points and exactly recover each $S_r$ with high probability. In contrast to previous work, our analysis does not rely on the numerical approximation of derivatives by finite-order differences.
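To make the additive structure described above concrete, here is a minimal toy sketch in Python. It is not the paper's recovery algorithm; the dimension, the interaction sets S1 and S2, and the component functions are illustrative assumptions, showing only what a black-box query of such an $f$ on $[-1,1]^d$ looks like.

```python
# Toy instance of a sparse additive model (illustrative assumptions, not the paper's algorithm):
# f = sum_{j in S_1} phi_j + sum_{j in S_2} phi_j, with univariate and pairwise components.
import numpy as np

d = 10                                   # ambient dimension
S1 = [(2,), (5,)]                        # "unknown" univariate components
S2 = [(0, 7), (3, 4)]                    # "unknown" pairwise interactions

def f(x):
    """Additive model: each phi_j depends on at most r coordinates, j in S_r."""
    val = 0.0
    for (i,) in S1:
        val += np.sin(np.pi * x[i])      # a univariate term phi_j
    for (i, k) in S2:
        val += x[i] * x[k] ** 2          # a pairwise term phi_j
    return val

# Black-box point query on the cube [-1, 1]^d, as assumed in the abstract.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=d)
print(f(x))
```

A recovery algorithm in this setting would only be allowed to evaluate `f` at chosen query points and would aim to identify S1 and S2 exactly, with high probability.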
Language :
English
Audience :
International
Popular science :
No
Submission date :
2020-06-08T14:11:39Z
Files
- Open access