The Berger-Tung inner and outer bounds give an approximation to the rate-distortion region in lossy distributed source coding. The regions are not tight in general. Their tightness is discussed throughout Chapter 12 of *Network Information Theory* by El Gamal and Kim (in particular Section 12.5), but even so I …

## All articles

## A Vector-Gaussian Quantizer using Spatial Tours

By representing a high-dimensional random variable in terms of its time-of-occurrence in a space-filling tour, we gain a systematic way of pruning out atypical regions of the observation space. This allows for efficient source coding since it excludes encodings that correspond to bogus observations.

This could lead to design of …

## Source Encoders as Channels

It is well known that a rate-distortion-optimal source encoder's output generally doesn't match its source's distribution. This can make some analyses a pain in the neck. For example, say you want to investigate the relationship between a signal that appears in a source, and that signal's appearance in an encoding …

## Some Useful Multivariate Gaussian Information Quantities

Many information quantities for multivariate normal distributions have nice closed forms. Their essential parts usually reduce to logs of quotients of minor determinants, so they combine nicely. Here's a list of them. All of them are somewhere in *Adaptive Wireless Communications* by Bliss and Govindasamy.
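Two representative examples of the pattern (standard results, stated here from memory rather than quoted from the book): for \(X\sim\mathcal{N}(\mu,\Sigma_X)\), and for jointly Gaussian \((X,Y)\) with joint covariance \(\Sigma\),

\begin{align} h(X) &= \frac{1}{2}\log\det(2\pi e\,\Sigma_X), & I(X;Y) &= \frac{1}{2}\log\frac{\det\Sigma_X\,\det\Sigma_Y}{\det\Sigma}. \end{align}

Both are logs of determinant quotients, which is exactly what makes chains of these quantities combine so cleanly.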

## Notation

- \([n]=\{1,\dots,n\}\)
- \((x_n …

## Combinatorial Iteration in Matlab

Often we need to write a program that iterates over all combinations of \(k\) elements from some set (elements chosen without replacement). It isn't obvious how to efficiently traverse all these sets, although if bitwise arithmetic is readily available one can use Gosper's Hack. This kind of problem …
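As a hedged sketch of the bitwise approach (in Python rather than Matlab, since arbitrary-precision integers make the trick easy to show), Gosper's Hack steps through every \(n\)-bit mask with exactly \(k\) bits set, in increasing order:

```python
def gosper_combinations(n, k):
    """Yield every n-bit integer mask with exactly k bits set,
    in increasing order, using Gosper's Hack."""
    if k == 0:
        yield 0
        return
    x = (1 << k) - 1          # smallest mask with k bits set
    while x < (1 << n):
        yield x
        c = x & -x            # lowest set bit of x
        r = x + c             # carry: shifts the low run of ones up
        # refill the low-order bits that the carry cleared
        x = (((x ^ r) >> 2) // c) | r

masks = list(gosper_combinations(5, 3))
print(len(masks))             # prints 10, i.e. C(5, 3)
```

Each mask can then index into the underlying set, so the subsets themselves never need to be materialized.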

## If M's components are only known up to a multiplicative factor then what is det(M)?

Say \(M\) is an \(n\times n\) complex matrix and we would like to learn its determinant. We observe another matrix \(\hat{M}\) whose components are each within some factor of \(M\), in other words \(\hat{M}_{ij}=c_{ij}\cdot M_{ij}\) and each \(c_{ij}\in (1- …
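The post's analysis is cut off above, but two quick numeric observations frame the question (a hypothetical \(2\times 2\) example, not taken from the post): a uniform factor \(c\) on every entry scales the determinant by exactly \(c^n\), while entry-dependent factors inside \((1-\varepsilon,1+\varepsilon)\) can nearly cancel it.

```python
def det2(m):
    # determinant of a 2x2 matrix given as nested lists
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

M = [[1.0, 2.0], [3.0, 4.0]]   # det(M) = -2
eps = 0.1

# Case 1: every c_ij equals the same c = 1 + eps.
# Each determinant term picks up n factors of c, so
# det(M_hat) = c**n * det(M) exactly.
uniform = [[(1 + eps) * x for x in row] for row in M]
print(det2(uniform) / det2(M))   # (1 + eps)**2 = 1.21

# Case 2: adversarial c_ij within (1 - eps, 1 + eps):
# inflate the diagonal term, deflate the off-diagonal term.
adversarial = [[(1 + eps) * M[0][0], (1 - eps) * M[0][1]],
               [(1 - eps) * M[1][0], (1 + eps) * M[1][1]]]
print(det2(adversarial))         # about -0.02: near-total cancellation
```

The second case is why any general bound on \(\det(\hat{M})\) has to account for cancellation between the signed terms of the determinant expansion, not just per-entry error.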

## Some Interaction Information Examples

Interaction information between three random variables has a much less immediate interpretation compared to other information quantities. This makes it trickier to work with. An interaction-information term \(I(X;Y;Z)\) between \(X\), \(Y\), and \(Z\) can be either negative:

If we are trying to specify \(X\) with …
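Under the sign convention \(I(X;Y;Z)=I(X;Y)-I(X;Y\mid Z)\) (conventions differ in the literature), both signs are easy to exhibit from small joint pmfs. A hedged sketch with two hypothetical examples: \(Z=X\oplus Y\) is synergistic and gives \(-1\) bit, while \(X=Y=Z\) is redundant and gives \(+1\) bit.

```python
import math

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, axes):
    """Marginalize a joint pmf over tuples down to the given axes."""
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def interaction_information(pmf):
    """I(X;Y;Z) = I(X;Y) - I(X;Y|Z), expanded into joint entropies."""
    H = lambda axes: entropy(marginal(pmf, axes))
    return (H((0,)) + H((1,)) + H((2,))
            - H((0, 1)) - H((0, 2)) - H((1, 2))
            + H((0, 1, 2)))

# Synergy: X, Y independent fair bits, Z = X xor Y
xor_pmf = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
print(interaction_information(xor_pmf))   # -1.0

# Redundancy: a single fair bit copied three times
copy_pmf = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(interaction_information(copy_pmf))  # 1.0
```

In the xor case \(Z\) creates dependence between otherwise independent \(X\) and \(Y\); in the copy case \(Z\) already tells you everything \(Y\) would have said about \(X\).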

## Porting PostmarketOS to the Motorola Photon Q

PostmarketOS aims to make old cell phones useful past their shelf life. Right now their efforts are focused on getting Alpine Linux to run on as many phones as possible. The majority of their codebase is a tool called

`pmbootstrap`

that automates parts of porting Alpine. Porting is …

## Point KL-Divergence is not Very Negative Very Often

If \(X\sim P\) then for any distribution \(Q\) it is unlikely that \(Q\) ascribes much greater density to \(X\)'s outcome than \(P\) does. In fact if \(P,Q\) have PDFs \(f_P, f_Q\), then:
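Before the derivation, a quick Monte Carlo sanity check that the event \(f_P(X)\le c\,f_Q(X)\) really does have probability at most \(c\). The distributions are a hypothetical choice for illustration: \(P=\mathcal{N}(0,1)\), \(Q=\mathcal{N}(3,1)\), \(c=0.1\).

```python
import math
import random

random.seed(0)

def normal_pdf(x, mu):
    # density of N(mu, 1) at x
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

c = 0.1
n = 200_000
# Draw X ~ P = N(0, 1) and count how often Q = N(3, 1) gets
# at least 1/c times more density than P at the sample point.
hits = sum(normal_pdf(x, 0.0) <= c * normal_pdf(x, 3.0)
           for x in (random.gauss(0.0, 1.0) for _ in range(n)))
print(hits / n)   # well below c = 0.1 (about 0.012 for this P, Q)
```

The empirical frequency sits far under the bound here; the bound itself is tight only when \(P\) concentrates exactly where \(f_P \le c f_Q\).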

\begin{align} \mathbb{P}(f_P(X)\leq c f_Q(X)) &= \int \mathbf{1}_{\{x …

## Tails of Probability Distributions are Nowhere Dense

Lately I have been thinking about Kullback-Leibler divergence between probability distributions, also called the KL discriminant or relative entropy. It is often useful to think about this quantity as a distance, even though it doesn't behave much like one: it's not symmetric, it doesn't satisfy the triangle inequality, and even …

Page 1 / 2 »