
Started work on a page on Bilinear regression.

## Comments

Interesting. What's the relationship between bilinear regression and the "error in variables" problem?

See http://azimuth.ch.mm.st/BayesianRegressionWithErrorsInVariables--JGalkowski.pdf


Hi Jan, I can't access that link, but I found this on [Wikipedia](http://en.wikipedia.org/wiki/Errors-in-variables_models). Assuming it's talking about the same thing, I think they're talking about two different problems. One of the motivations in the papers I've found for bilinear regression is that it provides a way to enforce sparsity in the model: if each sample is an $A \times B$ matrix, then simply flattening it to a vector and doing linear regression gives a model with $AB$ coefficients, whereas bilinear regression with $m$ pairs $(u_i, v_i)$ has $m(A+B)$ coefficients. This reduction comes at some loss of flexibility, but the reports I've read seem to indicate that overall it does well at preventing over-fitting. If there is a connection I'd be very interested to know more about it.

I'm going to have a go at using bilinear regression on the El Nino dataset, so the entry is partly just a place to store my calculations of the derivatives -- which are done for the way more complicated logistic regression model in the papers I've found.
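To make the coefficient-count comparison above concrete, here is a minimal NumPy sketch. The sizes and variable names are purely illustrative (not taken from the El Nino data); the prediction for a sample $X$ is $\sum_i u_i^\top X v_i$, which also equals $\mathrm{trace}(U^\top X V)$ when the $u_i$ and $v_i$ are stacked as columns of $U$ and $V$:

```python
import numpy as np

# Illustrative sizes: each sample is an A x B matrix,
# and the bilinear model uses m pairs (u_i, v_i).
A, B, m = 32, 32, 3
rng = np.random.default_rng(0)

X = rng.standard_normal((A, B))   # one sample
U = rng.standard_normal((A, m))   # columns are the u_i
V = rng.standard_normal((B, m))   # columns are the v_i

# Bilinear prediction: sum_i u_i' X v_i
pred = sum(U[:, i] @ X @ V[:, i] for i in range(m))

# Same number via the trace identity sum_i u_i' X v_i = trace(U' X V)
pred_trace = np.trace(U.T @ X @ V)

# Coefficient counts: flattened linear regression vs bilinear
n_linear = A * B        # 1024 coefficients
n_bilinear = m * (A + B)  # 192 coefficients
```

So for 32 by 32 samples and three pairs, the bilinear parameterisation needs 192 coefficients where the flattened linear model needs 1024.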


If you send your email address to me at empirical_bayesian -at- ieee -dot- org happy to send you a copy. Mystery to me why you can't access. I could put it in Google.

And I've queued up K. R. Gabriel, "Generalised bilinear regression", *Biometrika* (1998), **85**, 3, 689-700, and M. Linder, R. Sundberg, "Second-order calibration: bilinear least squares regression and a simple alternative", *Chemometrics and Intelligent Laboratory Systems*, **42** (1998), 159-178 to read ...

http://azimuth.ch.mm.st gives:

Directory Index Denied

This page is unable to be displayed because the website owner has disabled directory indexing for this site and has not provided an index.html file.

Please contact the website owner to find the correct URL.

If this is your site you may wish to add an index.html page with more useful information.

hth


I've been a bit lazy about putting in references, but one of the papers I've been looking at is [Sparse Bilinear Logistic Regression](ftp://ftp.math.ucla.edu/pub/camreport/cam14-12.pdf). One thing I've found is that there seem to be several things that go under the name of bilinear regression, but it's the kind of formulation in that paper that I'm looking at.

I've sent you an email to give you my email address.


http://azimuth.ch.mm.st/BayesianRegressionWithErrorsInVariables–JGalkowski.pdf

in Firefox gives

~~~~
No page found

We couldn't find a page for the link you visited. Please check that you have
the correct link and try again. If you are the owner of this domain, you can
setup a page here by creating a page/website in your account.
~~~~

Suddenly figured why trying to get Jan's document didn't work: the forum has merged two dashes into an n-dash in the output, so pasting doesn't work. If you select "Source" and paste the link from there, it works and I can see the paper.


This seems like a relevant reference in Jan's paper:

N. Cahill, A. C. Parnell, A. C. Kemp, B. P. Horton, “Modeling sea-level change using errors-in-variables integrated Gaussian processes”, 24th December 2013, http://arxiv.org/pdf/1312.6761.pdf


Just to note I've been trying to figure out a way to implement the bilinear model in a not-too-wasteful way in an array language like Matlab or NumPy (in order to avoid writing low-level code myself), but it's not proving obvious to me how to do this.


Just for the record, here's some Matlab/Octave code for evaluating a set of bilinear regression coefficients. I'm not putting it in a more permanent place because it's so ugly and inefficient, and it's unbelievably slow even for smaller test examples when put into Octave's fminunc optimization function. I think I do need to think about writing some lower-level code that will perform even remotely reasonably on the El Nino data.

~~~~
function score=getScore(matrices,values,lambda,params)
  % params stacks NO_VECS (u_j, v_j) pairs, each vector of length 32
  NO_VECS=3;
  vcs=reshape(params,64,NO_VECS);
  u=vcs(1:32,:);   % all NO_VECS columns (original indexed only the last pair)
  v=vcs(33:64,:);
  acc=0;
  for i = 1:size(matrices,3)
    % bilinear prediction for sample i: sum_j u_j' * M_i * v_j
    sc=sum(sum(u .* (matrices(:,:,i) * v)));
    t=sc - values(i);
    acc = acc + t * t;
  end
  % use L_1/2 regularisation/sparsification weights
  acc=acc+lambda*sum(sqrt(abs(params)));
  score=acc;
end
~~~~
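For comparison, here is a hypothetical vectorized version of the same objective in NumPy (mentioned upthread as one of the candidate array languages). The function name, argument names, and sizes are my own illustrative choices, not from the thread; `np.einsum` computes all the per-sample bilinear forms in one call, which avoids the explicit Matlab-style loop over samples:

```python
import numpy as np

def get_score(matrices, values, lam, params, n_vecs=3, dim=32):
    """Bilinear least-squares objective with L_1/2 regularisation.

    matrices : (n_samples, dim, dim) stack of sample matrices
    values   : (n_samples,) regression targets
    params   : flat array stacking n_vecs (u_j, v_j) pairs, each of length dim
    """
    vcs = params.reshape(2 * dim, n_vecs)
    U, V = vcs[:dim, :], vcs[dim:, :]
    # preds[n] = sum_j u_j' M_n v_j, computed for all samples at once:
    # contract U[a,j] * matrices[n,a,b] * V[b,j] over a, b, j
    preds = np.einsum('aj,nab,bj->n', U, matrices, V, optimize=True)
    resid = preds - values
    # squared-error loss plus L_1/2 sparsification penalty
    return resid @ resid + lam * np.sqrt(np.abs(params)).sum()
```

This should be numerically identical to the looped Octave version above (up to floating-point order of summation), and the flat `params` layout keeps it drop-in compatible with generic optimizers such as `scipy.optimize.minimize`.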