Wednesday, February 24, 2010

Isospectral Property of Hamiltonian Boundary Value Methods (HBVMs) and their connections with Runge-Kutta collocation methods. (arXiv:1002.4394v1 [math.NA])

Isospectral Property of Hamiltonian Boundary Value Methods (HBVMs) and their connections with Runge-Kutta collocation methods. (arXiv:1002.4394v1 [math.NA]): "

One main issue, when numerically integrating autonomous Hamiltonian systems,
is the long-term conservation of some of its invariants, among which the
Hamiltonian function itself. Recently, a new class of methods, named
Hamiltonian Boundary Value Methods (HBVMs), has been introduced and analysed; these methods exactly preserve polynomial Hamiltonians of arbitrarily high
degree. We here study a further property of such methods, namely that of
having, when cast as a Runge-Kutta method, a matrix of the Butcher tableau with
the same spectrum (apart from the zero eigenvalues) as that of the
corresponding Gauss-Legendre method, independently of the considered abscissae.
Consequently, HBVMs are always perfectly A-stable methods. This, in turn,
allows us to elucidate the existing connections with classical Runge-Kutta collocation methods.

"

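A quick way to see what the reference spectrum looks like is to build the Gauss-Legendre Butcher matrix directly. The NumPy sketch below is my own illustration (it constructs the plain Gauss collocation matrix $A_{ij}=\int_0^{c_i}\ell_j(\tau)\,d\tau$, not an HBVM): according to the abstract, the Butcher matrix of an HBVM shares these nonzero eigenvalues, independently of the abscissae.

```python
# Sketch: eigenvalues of the Butcher matrix of the s-stage Gauss-Legendre
# collocation method (the reference spectrum mentioned in the abstract).
import numpy as np

def gauss_legendre_butcher(s):
    # Gauss-Legendre nodes on [-1, 1], shifted to [0, 1].
    x, _ = np.polynomial.legendre.leggauss(s)
    c = 0.5 * (x + 1.0)
    A = np.zeros((s, s))
    for j in range(s):
        # Lagrange basis polynomial ell_j over the nodes c.
        ell = np.poly1d([1.0])
        for k in range(s):
            if k != j:
                ell *= np.poly1d([1.0, -c[k]]) / (c[j] - c[k])
        P = ell.integ()              # antiderivative with P(0) = 0
        A[:, j] = P(c)               # A[i, j] = integral_0^{c_i} ell_j
    return A, c

for s in (2, 3, 4):
    A, c = gauss_legendre_butcher(s)
    print(f"s = {s}: eigenvalues of A =", np.linalg.eigvals(A))
```
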
FE-BE coupling for a transmission problem involving microstructure. (arXiv:1002.4385v1 [math.NA])

FE-BE coupling for a transmission problem involving microstructure. (arXiv:1002.4385v1 [math.NA]): "

We analyze a finite element/boundary element procedure to solve a non-convex
contact problem for the double-well potential. After relaxing the associated
functional, the degenerate minimization problem is reduced to a boundary/domain
variational inequality, a discretized saddle point formulation of which may
then be solved numerically. The convergence of the Galerkin approximations to
certain macroscopic quantities and a corresponding a posteriori estimate for
the approximation error are discussed.

"

Computing the R of the QR factorization of tall and skinny matrices using MPI_Reduce. (arXiv:1002.4250v1 [math.NA])

Computing the R of the QR factorization of tall and skinny matrices using MPI_Reduce. (arXiv:1002.4250v1 [math.NA]): "

A QR factorization of a tall and skinny matrix with n columns can be
represented as a reduction. The operation used along the reduction tree takes as input two n-by-n upper triangular matrices and produces as output an n-by-n upper triangular matrix, defined as the R factor of the two input matrices stacked one on top of the other. This operation is binary, associative, and
commutative. We can therefore leverage the MPI library capabilities by using
user-defined MPI operations and MPI_Reduce to perform this reduction. The
resulting code is compact and portable. In this context, the user relies on the
MPI library to select a reduction tree appropriate for the underlying
architecture.

"

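The reduction operation itself fits in a few lines. Below is a minimal NumPy sketch (my own illustration, not code from the paper) of the binary operation described above: stack the two triangular inputs and take the R factor of the result. In the setting of the abstract this function would be registered as a user-defined MPI operation and passed to MPI_Reduce; that wiring is omitted here.

```python
# Sketch of the binary operation used along the TSQR reduction tree:
# given two n-by-n upper triangular R factors, return the R factor of
# the 2n-by-n matrix obtained by stacking them.
import numpy as np

def combine_R(R1, R2):
    stacked = np.vstack([R1, R2])          # 2n-by-n
    R = np.linalg.qr(stacked, mode='r')    # R factor only
    # Fix signs so the diagonal is nonnegative (QR is unique up to signs).
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return signs[:, None] * R

# Tiny check: reducing the R factors of the row blocks of a tall matrix
# reproduces (up to signs) the R factor of the whole matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((400, 5))
blocks = np.split(A, 4)                    # four 100-by-5 row blocks
Rs = [np.linalg.qr(B, mode='r') for B in blocks]
R_tree = combine_R(combine_R(Rs[0], Rs[1]), combine_R(Rs[2], Rs[3]))
R_direct = np.linalg.qr(A, mode='r')
print(np.allclose(np.abs(R_tree), np.abs(R_direct)))
```
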
Sparse Channel Separation using Random Probes. (arXiv:1002.4222v1 [math.NA])

Sparse Channel Separation using Random Probes. (arXiv:1002.4222v1 [math.NA]): "

This paper considers the problem of estimating the channel response (or
Green's function) between multiple source-receiver pairs. Typically, the
channel responses are estimated one-at-a-time: a single source sends out a
known probe signal, the receiver measures the probe signal convolved with the
channel response, and the responses are recovered using deconvolution. In this
paper, we show that if the channel responses are sparse and the probe signals
are random, then we can significantly reduce the total amount of time required
to probe the channels by activating all of the sources simultaneously. With all
sources activated simultaneously, the receiver measures a superposition of all
the channel responses convolved with the respective probe signals. Separating
this cumulative response into individual channel responses can be posed as a
linear inverse problem.


We show that channel response separation is possible (and stable) even when
the probing signals are relatively short in spite of the corresponding linear
system of equations becoming severely underdetermined. We derive a theoretical
lower bound on the length of the source signals that guarantees that this
separation is possible with high probability. The bound is derived by putting
the problem in the context of finding a sparse solution to an underdetermined
system of equations, and then using mathematical tools from the theory of
compressive sensing. Finally, we discuss some practical applications of these
results, which include forward modeling for seismic imaging, channel
equalization in multiple-input multiple-output communication, and increasing
the field-of-view in an imaging system by using coded apertures.

"

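To make the linear inverse problem concrete, here is a small NumPy sketch (my own, under simplifying assumptions: circular convolution, two sources, one receiver, hypothetical sizes) that assembles the superposed measurement and the underdetermined system matrix; recovering the sparse channels would additionally require an l1-type sparse solver, which is not shown.

```python
# Forward model sketch: one receiver, two sources probing simultaneously.
# y = sum_k probe_k (*) h_k, with each channel h_k sparse.
import numpy as np

rng = np.random.default_rng(1)
m, n, K = 64, 256, 2          # probe length m, channel length n, K sources

def circulant_columns(p, n):
    # Columns are circular shifts of the zero-padded probe p (length n).
    col = np.concatenate([p, np.zeros(n - len(p))])
    return np.column_stack([np.roll(col, k) for k in range(n)])

probes = [rng.standard_normal(m) for _ in range(K)]
h = np.zeros(K * n)
for k in range(K):            # a few nonzero taps per channel
    idx = rng.choice(n, size=5, replace=False)
    h[k * n + idx] = rng.standard_normal(5)

Phi = np.hstack([circulant_columns(p, n) for p in probes])  # n-by-(K*n)
y = Phi @ h                   # superposed measurement at the receiver
print(Phi.shape, "-> underdetermined:", Phi.shape[0] < Phi.shape[1])
```
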
Finite sections of band-dominated operators on discrete groups. (arXiv:1002.4258v1 [math.NA])

Finite sections of band-dominated operators on discrete groups. (arXiv:1002.4258v1 [math.NA]): "

Let $\Gamma$ be a finitely generated discrete exact group. We consider
operators on $l^2(\Gamma)$ which are composed of operators of multiplication by a function in $l^\infty (\Gamma)$ and of operators of left shift by
elements of $\Gamma$. These operators generate a $C^*$-subalgebra of
$L(l^2(\Gamma))$ the elements of which we call band-dominated operators on
$\Gamma$. We study the stability of the finite sections method for
band-dominated operators with respect to a given generating system of $\Gamma$.
Our approach is based on the equivalence of the stability of a sequence and the
Fredholmness of an associated operator, and on Roe's criterion for the
Fredholmness of a band-dominated operator on an exact discrete group, which we
formulate in terms of limit operators. Special emphasis is paid to the
quasicommutator ideal of the algebra generated by the finite sections sequences
and to the stability of sequences in that algebra. For both problems, the
sequence of the discrete boundaries plays an essential role.

"

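As a toy illustration of the finite sections method in the simplest case Γ = Z (a banded operator truncated to growing windows, not the general band-dominated setting of the paper), the following NumPy sketch watches a fixed entry of the truncated solutions settle down as the section grows.

```python
# Finite sections of a banded operator on l^2(Z), here for Gamma = Z:
# A = I + 0.4 * (shift), truncated to the window {-N, ..., N}.
import numpy as np

def finite_section_solution(N, rhs_fn):
    size = 2 * N + 1
    A_N = np.eye(size) + 0.4 * np.eye(size, k=1)   # truncation P_N A P_N
    b_N = np.array([rhs_fn(j) for j in range(-N, N + 1)])
    return np.linalg.solve(A_N, b_N), np.arange(-N, N + 1)

rhs = lambda j: 1.0 / (1.0 + j * j)                # a fixed right-hand side
prev = None
for N in (10, 20, 40, 80):
    x, idx = finite_section_solution(N, rhs)
    val = x[idx == 0][0]                           # solution entry at index 0
    if prev is not None:
        print(f"N = {N:3d}: x_0 = {val:.10f}  (change {abs(val - prev):.2e})")
    prev = val
```
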
A Finite Element Method for Density Estimation with Gaussian Process Priors

A Finite Element Method for Density Estimation with Gaussian Process Priors: "Michael Griebel and Markus Hegland

A variational problem characterizing the density estimator defined by the maximum a posteriori method with Gaussian process priors is derived. It is shown that this problem is well posed and can be solved with Newton's method. Numerically, the solution is approximated by a Galerkin/finite element m ... [SIAM J. Numer. Anal. 47, 4759 (2010)] published Wed Feb 24, 2010."

Tuesday, February 23, 2010

Convergence of a split-step Hermite method for the Gross-Pitaevskii equation

Convergence of a split-step Hermite method for the Gross-Pitaevskii equation: "

An error analysis is given for a discretization of the Gross–Pitaevskii equation by Strang splitting in time and Hermite collocation in space. A second-order error bound in L2 for the semidiscretization error of the Strang splitting in time is proven under suitable regularity assumptions on the exact solution. For the semidiscretization in space, high-order convergence is shown, depending on the regularity of the exact solution. The analyses of the semidiscretizations in time and space are finally combined into an error analysis of the fully discrete method.

"

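The second-order behaviour of Strang splitting is easy to observe in a much simpler setting than the Gross-Pitaevskii equation. The sketch below (my own toy example with non-commuting 2×2 matrices, not the scheme analysed in the paper) halves the step size and watches the error drop by roughly a factor of four.

```python
# Second-order convergence of Strang splitting for u' = (A + B) u,
# with non-commuting 2x2 matrices (a toy stand-in for the GPE setting).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.5, 0.0]])
u0 = np.array([1.0, 0.0])
T = 1.0
u_exact = expm(T * (A + B)) @ u0

for nsteps in (10, 20, 40, 80):
    h = T / nsteps
    step = expm(0.5 * h * A) @ expm(h * B) @ expm(0.5 * h * A)  # Strang step
    u = u0.copy()
    for _ in range(nsteps):
        u = step @ u
    err = np.linalg.norm(u - u_exact)
    print(f"h = {h:.4f}  error = {err:.3e}")   # error shrinks ~ h^2
```
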
Towards an Efficient Tile Matrix Inversion of Symmetric Positive Definite Matrices on Multicore Architectures. (arXiv:1002.4057v1 [cs.MS])

Towards an Efficient Tile Matrix Inversion of Symmetric Positive Definite Matrices on Multicore Architectures. (arXiv:1002.4057v1 [cs.MS]): "

The algorithms in the current sequential numerical linear algebra libraries
(e.g. LAPACK) do not parallelize well on multicore architectures. A new family
of algorithms, the tile algorithms, has recently been introduced. Previous
research has shown that it is possible to write efficient and scalable tile
algorithms for performing a Cholesky factorization, a (pseudo) LU
factorization, and a QR factorization. In this extended abstract, we attack the
problem of the computation of the inverse of a symmetric positive definite
matrix. We observe that, using a dynamic task scheduler, it is relatively
painless to translate existing LAPACK code to obtain a ready-to-be-executed
tile algorithm. However, we demonstrate that non-trivial compiler techniques (array renaming, loop reversal, and pipelining) must then be applied to
further increase the parallelism of our application. We present preliminary
experimental results.

"

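For readers who have not met "tile algorithms", the following NumPy sketch (my own illustration, not the scheduled tile code of the paper) writes a Cholesky factorization over square tiles; each tagged update is the kind of task a dynamic scheduler would dispatch to a core, and the inverse of a symmetric positive definite matrix is typically computed starting from such a factorization.

```python
# A tiled (blocked) Cholesky factorization A = L L^T written over square
# tiles; each tile update below is the kind of task a dynamic scheduler
# would dispatch in the tile algorithms discussed in the abstract.
import numpy as np
from scipy.linalg import solve_triangular

def tiled_cholesky(A, nb):
    n = A.shape[0]
    L = A.copy()
    for k in range(0, n, nb):
        kk = slice(k, k + nb)
        L[kk, kk] = np.linalg.cholesky(L[kk, kk])          # POTRF task
        for i in range(k + nb, n, nb):
            ii = slice(i, i + nb)
            # TRSM task: L[ii, kk] <- L[ii, kk] * L[kk, kk]^{-T}
            L[ii, kk] = solve_triangular(L[kk, kk], L[ii, kk].T, lower=True).T
            for j in range(k + nb, i + nb, nb):
                jj = slice(j, j + nb)
                # GEMM/SYRK task: trailing submatrix update
                L[ii, jj] -= L[ii, kk] @ L[jj, kk].T
    return np.tril(L)

rng = np.random.default_rng(2)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)            # symmetric positive definite
L = tiled_cholesky(A, nb=4)
print(np.allclose(L @ L.T, A))
```
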
A Tailored Finite Point Method for Convection-Diffusion-Reaction Problems

A Tailored Finite Point Method for Convection-Diffusion-Reaction Problems: "

Abstract
We study a tailored finite point method (TFPM) for solving the convection-diffusion-reaction equation. The solution basis
functions for the TFPM are constructed for a 5 point, 7 point and 9 point stencil. Some truncation error calculations are
given. Numerical tests are given on problems containing a boundary or interior layer. The tests compare TFPM with several
versions of a Petrov-Galerkin finite element scheme, and suggest that TFPM gives superior resolution of the layers.


  • Content Type Journal Article
  • DOI 10.1007/s10915-010-9354-5
  • Authors

    • Yintzer Shih, National Chung Hsing University Department of Applied Mathematics 250 Kuo-Kuang Road Taichung 40227 Taiwan
    • R. Bruce Kellogg, University of South Carolina Department of Mathematics Columbia SC 29208 USA
    • Peishan Tsai, National Chung Hsing University Department of Applied Mathematics 250 Kuo-Kuang Road Taichung 40227 Taiwan


"

Spatial discretization of restricted group algebras. (arXiv:1002.4104v1 [math.OA])

Spatial discretization of restricted group algebras. (arXiv:1002.4104v1 [math.OA]): "

We consider spatial discretizations by the finite section method of the
restricted group algebra of a finitely generated discrete group, which is
represented as a concrete operator algebra via its left-regular representation.
Special emphasis is paid to the quasicommutator ideal of the algebra generated
by the finite sections sequences and to the stability of sequences in that
algebra. For both problems, the sequence of the discrete boundaries plays an
essential role. Finally, for commutative groups and for free non-commutative
groups, the algebras of the finite sections sequences are shown to be fractal.

"

An iterative scheme for solving equations with locally $\sigma$-inverse monotone operators. (arXiv:1002.4165v1 [math.NA])

An iterative scheme for solving equations with locally $\sigma$-inverse monotone operators. (arXiv:1002.4165v1 [math.NA]): "

An iterative scheme for solving ill-posed nonlinear equations with locally
$\sigma$-inverse monotone operators is studied in this paper. A stopping rule
of discrepancy type is proposed. The existence of $u_{n_\delta}$ satisfying the
proposed stopping rule is proved. The convergence of this element to the
minimal-norm solution is justified mathematically.

"

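To give a flavour of a discrepancy-type stopping rule, here is a sketch for the much simpler linear case (plain Landweber iteration, not the nonlinear $\sigma$-inverse monotone setting of the paper): iterate until the residual falls below $\tau\delta$, where $\delta$ bounds the data noise.

```python
# Discrepancy-principle stopping for Landweber iteration on a linear
# ill-posed problem A u = f, with noisy data f_delta, ||f - f_delta|| ~ delta.
import numpy as np

rng = np.random.default_rng(3)
n = 50
# A mildly ill-conditioned matrix (a discretized smoothing operator).
A = np.array([[1.0 / (1.0 + abs(i - j)) ** 2 for j in range(n)] for i in range(n)])
u_true = np.sin(np.linspace(0, np.pi, n))
f = A @ u_true
delta = 1e-2
f_delta = f + delta * rng.standard_normal(n) / np.sqrt(n)

tau = 1.5
mu = 0.9 / np.linalg.norm(A, 2) ** 2      # step size, mu < 1/||A||^2
u = np.zeros(n)
k = 0
# Stop when the residual drops below tau*delta (or after a safety cap).
while np.linalg.norm(A @ u - f_delta) > tau * delta and k < 100000:
    u = u + mu * A.T @ (f_delta - A @ u)   # Landweber step
    k += 1
print(f"stopped after {k} iterations, error = {np.linalg.norm(u - u_true):.3e}")
```
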
Monday, February 22, 2010

Analysis of a Quadratic Programming Decomposition Algorithm

Analysis of a Quadratic Programming Decomposition Algorithm: "G. Bencteux, E. Cances, W. W. Hager, and C. Le Bris

We analyze a decomposition algorithm for minimizing a quadratic objective function, separable in $\mathbf{x}_1$ and $\mathbf{x}_2$, subject to the constraint that $\mathbf{x}_1$ and $\mathbf{x}_2$ are orthogonal vectors on the unit sphere. Our algorithm consists of a local step where we minimize th ... [SIAM J. Numer. Anal. 47, 4517 (2010)] published Wed Feb 17, 2010."

A Convergent Nonconforming Adaptive Finite Element Method with Quasi-Optimal Complexity

A Convergent Nonconforming Adaptive Finite Element Method with Quasi-Optimal Complexity: "Roland Becker, Shipeng Mao, and Zhongci Shi

In this paper, we prove convergence and quasi-optimal complexity of a simple adaptive nonconforming finite element method. In each step of the algorithm, the iterative solution of the discrete system is controlled by an adaptive stopping criterion, and the local refinement is based on either a simp ... [SIAM J. Numer. Anal. 47, 4639 (2010)] published Fri Feb 19, 2010."

Discrete-Ordinate Discontinuous Galerkin Methods for Solving the Radiative Transfer Equation

Discrete-Ordinate Discontinuous Galerkin Methods for Solving the Radiative Transfer Equation: "Weimin Han, Jianguo Huang, and Joseph A. Eichholz

The radiative transfer equation (RTE) occurs in a wide variety of applications. In this paper, we study discrete-ordinate discontinuous Galerkin methods for solving the RTE. The numerical methods are formed in two steps. In the first step, the discrete ordinate technique is applied to discretize th ... [SIAM J. Sci. Comput. 32, 477 (2010)] published Wed Feb 17, 2010."

Algebraic Multigrid for Markov Chains

Algebraic Multigrid for Markov Chains: "H. De Sterck, T. A. Manteuffel, S. F. McCormick, K. Miller, J. Ruge et al.

An algebraic multigrid (AMG) method is presented for the calculation of the stationary probability vector of an irreducible Markov chain. The method is based on standard AMG for nonsingular linear systems, but in a multiplicative, adaptive setting. A modified AMG interpolation formula is proposed t ... [SIAM J. Sci. Comput. 32, 544 (2010)] published Wed Feb 17, 2010."

Accelerated A Posteriori Error Estimation for the Reduced Basis Method with Application to 3D Electromagnetic Scattering Problems

Accelerated A Posteriori Error Estimation for the Reduced Basis Method with Application to 3D Electromagnetic Scattering Problems: "Jan Pomplun and Frank Schmidt

We propose a new method for fast estimation of error bounds for outputs of interest in the reduced basis context, efficiently applicable to real world 3D problems. Geometric parameterizations of complicated 2D, or even simple 3D, structures easily leads to affine expansions consisting of a high num ... [SIAM J. Sci. Comput. 32, 498 (2010)] published Wed Feb 17, 2010."

Fast Evaluation of Volume Potentials in Boundary Element Methods

Fast Evaluation of Volume Potentials in Boundary Element Methods: "G. Of, O. Steinbach, and P. Urthaler

The solution of inhomogeneous partial differential equations by boundary element methods requires the evaluation of volume potentials. A direct standard computation of the classical Newton potentials is possible but expensive. Here, a fast evaluation of the Newton potentials by using the fast multi ... [SIAM J. Sci. Comput. 32, 585 (2010)] published Wed Feb 17, 2010."

Stanford mathematician: In reality, simulation is key to math education

Stanford mathematician: In reality, simulation is key to math education: "(PhysOrg.com) -- Role-playing games such as 'World of Warcraft' could reverse the declining math proficiency of middle school students, Keith Devlin told an audience at the AAAS annual meeting in San Diego."

Rate of convergence for a Galerkin scheme approximating a two-scale reaction-diffusion system with nonlinear transmission condition. (arXiv:1002.3793v1 [math.NA])

Rate of convergence for a Galerkin scheme approximating a two-scale reaction-diffusion system with nonlinear transmission condition. (arXiv:1002.3793v1 [math.NA]): "

We study a two-scale reaction-diffusion system with nonlinear reaction terms
and a nonlinear transmission condition (remotely resembling Henry's law) posed
at air-liquid interfaces. We prove the rate of convergence of the two-scale
Galerkin method proposed in Muntean & Neuss-Radu (2009) for approximating this
system in the case when both the microstructure and macroscopic domain are
two-dimensional. The main difficulty is created by the presence of a boundary
nonlinear term entering the transmission condition. Besides using the
particular two-scale structure of the system, the ingredients of the proof
include two-scale interpolation-error estimates, an interpolation-trace
inequality, and improved regularity estimates.

"

Sunday, February 21, 2010

Factorization of Non-Commutative Polynomials. (arXiv:1002.3180v1 [cs.MS])

Factorization of Non-Commutative Polynomials. (arXiv:1002.3180v1 [cs.MS]): "

We describe an algorithm for the factorization of non-commutative polynomials
over a field. The first sketch of this algorithm appeared in an unpublished
manuscript (literally hand written notes) by James H. Davenport more than 20
years ago. This version of the algorithm contains some improvements with
respect to the original sketch. An improved version of the algorithm has been
fully implemented in the Axiom computer algebra system.

"

Equal--area method for scalar conservation laws. (arXiv:1002.3260v1 [math.NA])

Equal--area method for scalar conservation laws. (arXiv:1002.3260v1 [math.NA]): "

We study a one-dimensional scalar conservation law and develop a simple numerical method for computing the unique entropy-admissible weak solution of the initial value problem. The method is based on the equal-area principle and yields the solution at a given time directly.

"

Max-relaxation iteration procedure for building of Barabanov norms: convergence and examples. (arXiv:1002.3251v1 [math.RA])

Max-relaxation iteration procedure for building of Barabanov norms: convergence and examples. (arXiv:1002.3251v1 [math.RA]): "

The problem of construction of Barabanov norms for analysis of properties of
the joint (generalized) spectral radius of matrix sets has been discussed in a
number of publications. In previous papers of the author the method of
Barabanov norms was the key instrument in disproving the Lagarias-Wang
Finiteness Conjecture. The related constructions were essentially based on the
study of the geometrical properties of the unit balls of some specific
Barabanov norms. In this context it looks somewhat paradoxical that no detailed analysis of the geometrical properties of the unit balls of Barabanov norms can be found among current publications. This is partially explained by the fact that Barabanov norms are defined nonconstructively, by an implicit procedure, so even in the simplest cases it is very difficult to visualize the shape of their unit balls. The present work may be treated as a first step towards making up this deficiency. In the paper an iteration procedure is considered that allows one to build Barabanov norms numerically for irreducible matrix sets and simultaneously to compute the joint spectral radius of these sets.

"

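For readers who want to experiment, the joint spectral radius that the procedure targets can be bounded from below by brute force. The sketch below is my own illustration of the standard lower bound, the maximum over length-k products P of ρ(P)^{1/k}, and is not the max-relaxation procedure of the paper.

```python
# Brute-force lower bounds for the joint spectral radius of a matrix set:
# JSR >= max over products P of length k of rho(P)^(1/k).
import itertools
import numpy as np

def jsr_lower_bound(mats, k):
    best = 0.0
    for combo in itertools.product(mats, repeat=k):
        P = np.linalg.multi_dot(combo) if k > 1 else combo[0]
        rho = max(abs(np.linalg.eigvals(P)))       # spectral radius of the product
        best = max(best, rho ** (1.0 / k))
    return best

A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
for k in range(1, 7):
    print(f"k = {k}: lower bound {jsr_lower_bound([A0, A1], k):.6f}")
```
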
Creating materials with a desired refraction coefficient: numerical experiments. (arXiv:1002.3533v1 [math.NA])

Creating materials with a desired refraction coefficient: numerical experiments. (arXiv:1002.3533v1 [math.NA]): "

A recipe for creating materials with a desired refraction coefficient is
implemented numerically. The following assumptions are used:
\[
\zeta_m = \frac{h(x_m)}{a^\kappa}, \qquad d = O\bigl(a^{(2-\kappa)/3}\bigr), \qquad M = O\bigl(1/a^{2-\kappa}\bigr), \qquad \kappa \in (0,1),
\]
where $\zeta_m$ and $x_m$ are the boundary impedance and center of the $m$-th ball, respectively, $h(x)\in C(D)$, $\mathrm{Im}\, h(x)\leq 0$, $M$ is the number of small balls embedded in the cube $D$, $a$ is the radius of the small balls, and $d$ is the distance between neighboring balls.


An error estimate is given for the approximate solution of the many-body
scattering problem in the case of small scatterers. This result is used for the
estimate of the minimal number of small particles to be embedded in a given
domain $D$ in order to get a material whose refraction coefficient approximates
the desired one with the relative error not exceeding a desired small quantity.

"

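The scaling relations in the abstract are easy to tabulate. The sketch below (my own, with the O(·) constants set to 1 and illustrative values of κ and a) shows how the inter-particle distance d and the number of balls M behave as the radius a shrinks.

```python
# Illustrative evaluation of the scalings d = O(a^{(2-kappa)/3}) and
# M = O(1/a^{2-kappa}) from the abstract, with the O(.) constants set to 1.
kappa = 0.5
for a in (1e-2, 1e-3, 1e-4):
    d = a ** ((2.0 - kappa) / 3.0)   # distance between neighbouring balls
    M = 1.0 / a ** (2.0 - kappa)     # number of small balls in the cube D
    print(f"a = {a:.0e}:  d ~ {d:.3e},  M ~ {M:.3e}")
```
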
Thursday, February 18, 2010

On the history of Google's PageRank algorithm and on computer scientists' publications

On the history of Google's PageRank algorithm and on computer scientists' publications: "

On this blog we have already discussed the origins of the PageRank algorithm used by Sergey Brin and Larry Page for Google in “La historia oculta detrás del algoritmo PageRank de Google (o Keller, Keener, Page, Brin y Kleinberg),” 21 October 2008, which I know interested many of this blog's readers.


Massimo Franceschet has studied the history of this algorithm in detail and has traced its origins to sociology and economics in his article “PageRank: Stand on the shoulders of giants,” ArXiv, 15 Feb 2010. Those interested in a brief summary of the history can turn to KentuckyFC, “Scientist Finds PageRank-Type Algorithm from the 1940s,” the physics ArXiv Blog, February 17, 2010. This article could hardly go unnoticed, and mezvan has already submitted it to Menéame as “Los orígenes del famoso algoritmo PageRank se remontan a 1941,” which tells us that “In 1941, Wassily Leontief published a paper in which a country's economy is divided into sectors that supplied and demanded resources from one another, although not in equal measure. The question arose: what is the value of each sector, given that they are so tightly integrated? Leontief's answer was to develop an iterative method that values each sector on the basis of the importance of the sectors that supply it. Sound familiar? In 1973, Leontief was awarded the Nobel Prize in Economics for this work …
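
For those who have never seen it written out, here is a minimal sketch (mine, not taken from any of the papers cited) of a PageRank-style power iteration: each node's score is repeatedly recomputed from the scores of the nodes pointing to it, which is exactly the flavour of Leontief's iterative valuation.

```python
# Minimal PageRank power iteration on a tiny directed graph.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # node -> nodes it links to
n, d = 4, 0.85                                 # damping factor

# Column-stochastic link matrix: column j spreads node j's score to its targets.
P = np.zeros((n, n))
for j, targets in links.items():
    P[targets, j] = 1.0 / len(targets)

r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - d) / n + d * (P @ r)              # PageRank update
print(np.round(r / r.sum(), 4))
```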


By the way, I read KentuckyFC yesterday afternoon and thought about submitting the article to Menéame, where it would surely reach the front page (and it has), but I am unable to log in to Menéame: for some reason my account has been deactivated, and the password-recovery system sends me an email with a link that takes me to the public part of my page but does not let me change the password. So I have no access… I don't know whether this has happened to anyone else. I cannot comment on the stories and can only vote for some of them anonymously… I will probably end up creating a new account…


But let's get to the point. Franceschet has published very interesting articles on bibliometrics, above all for computer scientists.


Massimo Franceschet, “The role of conference publications in computer science: a bibliometric view,” January 20, 2010. “In computer science, from a bibliometric perspective, the best strategy for gaining impact is to publish a few high-quality contributions in indexed journals, rather than many premature works (‘publishing quarks’) at international conferences.” The conclusion may seem obvious, but it is not. In Spain, many people in computer science boast about their publications at Prestigious International Conferences and claim that publishing at many of them is much harder than publishing in many journals. To reach his conclusion, Massimo carried out a bibliometric analysis of the bibliographic information in DBLP (which covers both journals and international conferences). His study showed that, on average, a journal article is cited 5.41 times, while a conference paper is cited only 0.71 times. I leave you the conclusions in English, because I know computer scientists like to read these things in English… even if it amounts to a rap on the knuckles.


CONCLUSIONS: (i) computer scientists publish more in conference proceedings than in archival journals; (ii) the impact of journal publications is significantly higher than the impact of conference papers. The take-home message for the computer science community might be the following: while it is harder to get published in journals, the effort is ultimately rewarded with a higher impact. From a bibliometric perspective, the best strategy to gain impact seems to be that of publishing few, final, and well-polished contributions in archival journals, instead of many premature ‘publishing quarks’ in conference proceedings.


You are a researcher, you have a “wonderful” article and you want it published. Should you look for a prestigious journal or a famous (popular) one? Aren't prestige and fame the same thing? It may sound like a trivial question, but bibliometrics, among its other goals, is obliged to settle this kind of issue, and Massimo Franceschet takes up the challenge in “The difference between popularity and prestige in the sciences and in the social sciences: a bibliometric analysis,” preprint submitted to Elsevier, January 18, 2010. The popularity of an international journal is measured by its number of citations and its impact factor, but prestige requires a more complicated computation, similar to a Google PageRank-type algorithm (Massimo is a “devotee” of the eigenfactor). Massimo's study shows that prestige equals fame in many fields, such as the geosciences, biology, medicine and the social sciences, but not in all of them; they differ in fields such as physics, engineering, materials science and computer science. According to his study, journals can be classified into four categories:


1. journals that are both prestigious and popular; they receive many citations and are cited by other prestigious journals.


2. journals that are neither prestigious nor popular; they receive few citations, and those come from “obscure” journals.


3. journals that are popular but not prestigious; they have a high number of citations per article, but most of them come from journals of little prestige. These journals are not necessarily highly cited.


4. journals that are prestigious but not very popular; they receive few citations compared with the number of articles they publish, but those citations come from very prestigious journals. These journals are not necessarily poorly cited.


Nobody has any doubts about the journals in categories 1 and 2, but the status of the journals in categories 3 and 4 is highly controversial. Massimo recommends the eigenfactor as the best bibliometric index for comparing journals in these last two categories.


Finally, if you are a computer scientist, I recommend “The skewness of computer science,” ArXiv, last revised 15 Feb 2010, where Massimo states that “Computer science is a relatively young discipline combining science, engineering, and mathematics. (…) In the computer science publication culture, conferences are an important vehicle to quickly move ideas, and journals often publish deeper versions of papers already presented at conferences. (…) The skewness in the distribution of mean citedness of different venues combines with the asymmetry in citedness of articles in each venue, resulting in a highly asymmetric citation distribution with a power law tail. Furthermore, the skewness of conference publications is more pronounced than the asymmetry of journal papers. Finally, the impact of journal papers, as measured with bibliometric indicators, largely dominates that of proceeding papers.” I dare say computer scientists will have to take note and bear these studies in mind…


Filed under: Bibliometrics, Science, Impact factor, History, Computer Science, News, Science Tagged: impact factor, bibliometrics, science, curiosities, economics, history, computer science, news "

Wednesday, February 17, 2010

A mollified Ensemble Kalman filter. (arXiv:1002.3091v1 [math.NA])

A mollified Ensemble Kalman filter. (arXiv:1002.3091v1 [math.NA]): "

It is well recognized that discontinuous analysis increments of sequential
data assimilation systems, such as ensemble Kalman filters, might lead to
spurious high frequency adjustment processes in the model dynamics. Various
methods have been devised to continuously spread out the analysis increments
over a fixed time interval centered about analysis time. Among these techniques
are nudging and incremental analysis updates (IAU). Here we propose another
alternative, which may be viewed as a hybrid of nudging and IAU and which
arises naturally from a recently proposed continuous formulation of the
ensemble Kalman analysis step. A new slow-fast extension of the popular
Lorenz-96 model is introduced to demonstrate the properties of the proposed
mollified ensemble Kalman filter.

"

A geometric approach to error estimates for conservation laws posed on a spacetime. (arXiv:1002.3137v1 [math.AP])

A geometric approach to error estimates for conservation laws posed on a spacetime. (arXiv:1002.3137v1 [math.AP]): "

We consider a hyperbolic conservation law posed on an (N+1)-dimensional
spacetime, whose flux is a field of differential forms of degree N.
Generalizing the classical Kuznetsov's method, we derive an L1 error estimate
which applies to a large class of approximate solutions. In particular, we
apply our main theorem and deal with two entropy solutions associated with
distinct flux fields, as well as with an entropy solution and an approximate
solution. Our framework encompasses, for instance, equations posed on a
globally hyperbolic Lorentzian manifold.

"

Kinetic relations for undercompressive shock waves. Physical, mathematical, and numerical issues. (arXiv:1002.2950v1 [math.AP])

Kinetic relations for undercompressive shock waves. Physical, mathematical, and numerical issues. (arXiv:1002.2950v1 [math.AP]): "

Kinetic relations are required in order to characterize nonclassical
undercompressive shock waves and formulate a well-posed initial value problem
for nonlinear hyperbolic systems of conservation laws. Such nonclassical waves
arise in weak solutions of a large variety of physical models: phase
transitions, thin liquid films, magnetohydrodynamics, Camassa-Holm model,
martensite-austenite materials, semi-conductors, combustion theory, etc. This
review presents the research done in the last fifteen years which led to the development of the theory of kinetic relations for undercompressive shocks and which now covers many physical, mathematical, and numerical issues. The main difficulty overcome in our analysis of nonclassical entropy solutions comes from their lack of monotonicity with respect to the initial data. Undercompressive shocks of hyperbolic conservation laws turn out to exhibit features that are very similar to shocks of nonconservative hyperbolic systems, which were investigated earlier by the author.

"

Tuesday, February 16, 2010

Construction Of Difference Schemes For Nonlinear Singular Perturbed Equations By Approximation Of Coefficients. (arXiv:1002.2925v1 [math.NA])

Construction Of Difference Schemes For Nonlinear Singular Perturbed Equations By Approximation Of Coefficients. (arXiv:1002.2925v1 [math.NA]): "

Mathematical modeling of many physical processes such as diffusion, viscosity
of fluids and combustion involves differential equations with small
coefficients of higher derivatives. These may be small diffusion coefficients
for modeling the spreading of impurities, small coefficients of viscosity in
fluid flow simulation, etc. The difficulty with solving such problems is that if the small parameter multiplying the higher derivatives is set to zero, the solution of the degenerate problem does not correctly approximate the solution of the original problem, even as the small parameter approaches zero; the solution of the original problem exhibits the emergence of a boundary layer. As a result, applying classical difference schemes to such equations produces large inaccuracies. Therefore, the numerical solution of differential equations with small coefficients of the higher derivatives demands special difference schemes exhibiting uniform convergence with respect to the small parameters involved. In this article the author investigates two nonlinear boundary value problems on a finite interval, resulting in exponential and power-law boundary layers.

"

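The failure of classical schemes described above is easy to reproduce on a linear toy problem (my own example, not one of the nonlinear problems of the paper): discretize -εu'' + u' = 0 with u(0) = 0, u(1) = 1, whose solution has a boundary layer at x = 1. On a mesh with h ≫ ε the central scheme oscillates, while a simple upwind scheme stays monotone at the price of smearing the layer.

```python
# Central vs upwind differences for -eps*u'' + u' = 0, u(0)=0, u(1)=1,
# a singularly perturbed problem with a boundary layer at x = 1.
import numpy as np

def solve(eps, N, scheme):
    h = 1.0 / N
    A = np.zeros((N - 1, N - 1))
    b = np.zeros(N - 1)
    for i in range(N - 1):
        if scheme == "central":
            lo, di, up = -eps / h**2 - 1 / (2 * h), 2 * eps / h**2, -eps / h**2 + 1 / (2 * h)
        else:  # upwind convection (flow to the right, backward difference)
            lo, di, up = -eps / h**2 - 1 / h, 2 * eps / h**2 + 1 / h, -eps / h**2
        if i > 0:
            A[i, i - 1] = lo
        A[i, i] = di
        if i < N - 2:
            A[i, i + 1] = up
        else:
            b[i] -= up * 1.0        # right boundary value u(1) = 1
    return np.linalg.solve(A, b)

eps, N = 1e-3, 32
for scheme in ("central", "upwind"):
    u = solve(eps, N, scheme)
    # Central differences overshoot/undershoot; upwind stays in [0, 1].
    print(scheme, "min =", round(float(u.min()), 4), "max =", round(float(u.max()), 4))
```
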
Numerical comparisons between Gauss-Legendre methods and Hamiltonian BVMs defined over Gauss points. (arXiv:1002.2727v1 [math.NA])

Numerical comparisons between Gauss-Legendre methods and Hamiltonian BVMs defined over Gauss points. (arXiv:1002.2727v1 [math.NA]): "

Hamiltonian Boundary Value Methods are a new class of energy preserving one
step methods for the solution of polynomial Hamiltonian dynamical systems. They
can be thought of as a generalization of collocation methods in that they may
be defined by imposing a suitable set of extended collocation conditions. In
particular, in the way they are described in this note, they are related to
Gauss collocation methods with the difference that they are able to precisely
conserve the Hamiltonian function in the case where this is a polynomial of any
high degree in the momenta and in the generalized coordinates. A description of
these new formulas is followed by a few test problems showing how, in many
relevant situations, the precise conservation of the Hamiltonian is crucial to
simulate on a computer the correct behavior of the theoretical solutions.

"

The HBVMs Homepage. (arXiv:1002.2757v1 [math.NA])

The HBVMs Homepage. (arXiv:1002.2757v1 [math.NA]): "

Hamiltonian Boundary Value Methods (in short, HBVMs) is a new class of
numerical methods for the efficient numerical solution of canonical Hamiltonian
systems. In particular, their main feature is that of exactly preserving, for
the numerical solution, the value of the Hamiltonian function, when the latter
is a polynomial of arbitrarily high degree. Clearly, this fact implies a
practical conservation of any analytical Hamiltonian function. In these notes,
we collect the introductory material on HBVMs contained in the HBVMs Homepage,
available at this http URL

"

Monday, February 15, 2010

Non-Fickian delay reaction–diffusion equations: Theoretical and numerical study☆

Non-Fickian delay reaction–diffusion equations: Theoretical and numerical study☆: "Publication year: 2010
Source: Applied Numerical Mathematics, In Press, Accepted Manuscript, Available online 13 February 2010
J.R. Branco, J.A. Ferreira, P. da Silva
Fisher's equation is established by combining Fick's law for the flux and the mass conservation law with a reaction term evaluated at the present time. If this term depends on the solution at some past time, a delay parameter is introduced and the delay Fisher's equation is obtained. Modifying Fick's law for the flux by considering a time-memory term, integro-differential equations of Volterra type are established. In this paper we study reaction–diffusion equations obtained by combining the two modifications: a time-memory term in the flux and a delay parameter in the reaction term. The delay integro-differential equations also known..."

Sunday, February 14, 2010

The mean field traveling salesman and related problems

The mean field traveling salesman and related problems: "

Abstract The edges of a complete graph on n vertices are assigned i.i.d. random costs from a distribution for which the interval [0, t] has probability asymptotic to t as t→0 through positive values. In this so-called pseudo-dimension 1 mean field model, we study several optimization problems,
of which the traveling salesman is the best known. We prove that, as n→∞, the cost of the minimum traveling salesman tour converges in probability to a certain number, approximately 2.0415, which
is characterized analytically.


  • Content Type Journal Article
  • DOI 10.1007/s11511-010-0046-7
  • Authors

    • Johan Wästlund, Chalmers University of Technology Department of Mathematical Sciences SE-412 96 Gothenburg Sweden


"

Tiling Polygons with Lattice Triangles

Tiling Polygons with Lattice Triangles: "

Abstract Given a simple polygon with rational coordinates having one vertex at the origin and an adjacent vertex on the x-axis, we look at the problem of the location of the vertices for a tiling of the polygon using lattice triangles (i.e., triangles
which are congruent to a triangle whose vertices have integer coordinates). We show that the coordinates of the vertices in any tiling are rationals whose possible denominators are odd numbers depending on the cotangents of the angles in the triangles.


  • Content Type Journal Article
  • DOI 10.1007/s00454-010-9249-0
  • Authors

    • Steve Butler, University of California, Los Angeles Los Angeles CA USA
    • Fan Chung, University of California, San Diego San Diego CA USA
    • Ron Graham, University of California, San Diego San Diego CA USA
    • Miklós Laczkovich, Eötvös Loránd University Budapest Hungary


"

On Rich Lines in Grids

On Rich Lines in Grids: "

Abstract In this paper we show that if one has a grid A×B, where A and B are sets of n real numbers, then there can be only very few “rich” lines in certain quite small families. Indeed, we show that if the family has lines taking on n^ε distinct slopes, and where each line is parallel to n^ε others (so, at least n^{2ε} lines in total), then at least one of these lines must fail to be “rich”. This result immediately implies non-trivial sum-product inequalities; though, our proof makes use of the Szemeredi-Trotter inequality, which Elekes used in his argument for lower bounds on |C+C|+|C.C|.


  • Content Type Journal Article
  • DOI 10.1007/s00454-010-9250-7
  • Authors

    • Evan Borenstein, Georgia Institute of Technology School of Mathematics 103 Skiles Atlanta GA 30332 USA
    • Ernie Croot, Georgia Institute of Technology School of Mathematics 103 Skiles Atlanta GA 30332 USA


"

On Lines and Joints

On Lines and Joints: "

Abstract Let L be a set of n lines in ℝ^d, for d≥3. A joint of L is a point incident to at least d lines of L, not all in a common hyperplane. Using a very simple algebraic proof technique, we show that the maximum possible number of joints of L is Θ(n^{d/(d−1)}). For d=3, this is a considerable simplification of the original algebraic proof of Guth and Katz (Algebraic methods in discrete analogs of the Kakeya problem, 4 December 2008, arXiv:0812.1043), and of the follow-up simpler proof of Elekes et al. (On lines, joints, and incidences in three dimensions. Manuscript, 11 May 2009, arXiv:0905.1583). Some extensions, e.g., to the case of joints of algebraic curves, are also presented.


  • Content Type Journal Article
  • DOI 10.1007/s00454-010-9246-3
  • Authors

    • Haim Kaplan, Tel Aviv University School of Computer Science Tel Aviv 69978 Israel
    • Micha Sharir, Tel Aviv University School of Computer Science Tel Aviv 69978 Israel
    • Eugenii Shustin, Tel Aviv University School of Mathematical Sciences Tel Aviv 69978 Israel


"

Generalized Jacobi Rational Spectral Method and Its Applications

Generalized Jacobi Rational Spectral Method and Its Applications: "

Abstract We introduce an orthogonal system on the whole line, induced by the generalized Jacobi functions. Some results on the generalized
Jacobi rational approximation are established, which play important roles in the related spectral methods. As examples of
applications, the rational spectral schemes are proposed for sine-Gordon, Klein-Gordon and Fisher equations, with the convergence
analysis. Numerical results demonstrate their efficiency.


  • Content Type Journal Article
  • DOI 10.1007/s10915-010-9353-6
  • Authors

    • Ben-Yu Guo, Shanghai Normal University, Scientific Computing Key Laboratory of Shanghai Universities, Division of Computational Science of E-institute of Shanghai Universities Department of Mathematics Shanghai 200234 China
    • Yong-Gang Yi, Tianhua College Department of Basic Courses Shanghai 201805 China


"

Saturday, February 13, 2010

A Posteriori Error Estimation for a Finite Volume Discretization on Anisotropic Meshes

A Posteriori Error Estimation for a Finite Volume Discretization on Anisotropic Meshes: "

Abstract A singularly perturbed reaction-diffusion problem is considered. The small diffusion coefficient generically leads to solutions
with boundary layers. The problem is discretized by a vertex-centered finite volume method. The anisotropy of the solution
is reflected by using anisotropic meshes which can improve the accuracy of the discretization considerably. The main focus is on a posteriori error estimation. A residual type error estimator is proposed and rigorously analysed. It is shown to be robust with respect
to the small perturbation parameter. The estimator is also robust with respect to the mesh anisotropy as long as the anisotropic
mesh sufficiently reflects the anisotropy of the solution (which is almost always the case for sensible discretizations).
Altogether, reliable and efficient a posteriori error estimation is achieved for the finite volume method on anisotropic meshes. Numerical experiments in 2D underline the
applicability of the theoretical results in adaptive computations.


  • Content Type Journal Article
  • DOI 10.1007/s10915-010-9352-7
  • Authors

    • M. Afif, Université Cadi Ayyad Faculté des Sciences-Semlalia, Laboratoire LIBMA B.P. 2390 Marrakech Maroc
    • B. Amaziane, Université de Pau et des Pays de l’Adour Laboratoire de Mathématiques et de leurs Applications, CNRS-UMR 5142 av. de l’Université 64000 Pau France
    • G. Kunert, IAV GmbH 09120 Chemnitz Germany
    • Z. Mghazli, Université Ibn Tofaïl Faculté des Sciences, Laboratoire LIRNE-Equipe EIMA B.P. 133 Kénitra Maroc
    • S. Nicaise, Université Lille Nord de France, UVHC LAMAV, FR CNRS 2956 59313 Valenciennes Cedex 9 France


"

Friday, February 12, 2010

Discovery could lead to more difficult Sudoku puzzles

Discovery could lead to more difficult Sudoku puzzles: "(PhysOrg.com) -- A new analysis of number randomness in Sudoku matrices could lead to the development of more difficult and multi-dimensional Sudoku puzzles. In a recent study, mathematicians have found that the way that numbers are arranged in Sudoku puzzles is even more random than the number arrangements in randomly-generated matrices. The counter-intuitive discovery may enable researchers to develop algorithms that generate Sudoku matrices with fewer clues, making them more difficult to solve."

A First Principles Development of a General Anisotropic Potential for Polycyclic Aromatic Hydrocarbons

A First Principles Development of a General Anisotropic Potential for Polycyclic Aromatic Hydrocarbons: "Journal of Chemical Theory and Computation, Volume 0, Issue 0, Articles ASAP (As Soon As Publishable)."

Acidity of the Aqueous Rutile TiO2(110) Surface from Density Functional Theory Based Molecular Dynamics

Acidity of the Aqueous Rutile TiO2(110) Surface from Density Functional Theory Based Molecular Dynamics: "Journal of Chemical Theory and Computation, Volume 0, Issue 0, Articles ASAP (As Soon As Publishable)."

On the Solutions of Time-Fractional Reaction-Diffusion Equations

On the Solutions of Time-Fractional Reaction-Diffusion Equations: "Publication year: 2010
Source: Communications in Nonlinear Science and Numerical Simulation, In Press, Accepted Manuscript, Available online 12 February 2010
S.Z. Rida, A.M.A. El-Sayed, A.A.M. Arafa
In this paper, a new application of the generalized differential transform method (GDTM) is presented for solving time-fractional reaction-diffusion equations. To illustrate the reliability of the method, some examples are provided."

Chaos in fractional conjugate Lorenz system and its scaling attractors

Chaos in fractional conjugate Lorenz system and its scaling attractors: "Publication year: 2010
Source: Communications in Nonlinear Science and Numerical Simulation, In Press, Accepted Manuscript, Available online 12 February 2010
Qigui Yang, Caibin Zeng
Chaotic dynamics of the fractional conjugate Lorenz system are demonstrated in terms of local stability and the largest Lyapunov exponent. Chaos does exist in the fractional conjugate Lorenz system with order less than 3, since it has a positive largest Lyapunov exponent. Furthermore, the scaling of chaotic attractors of the fractional conjugate Lorenz system is theoretically and numerically analyzed with the help of a one-way synchronization method and an adaptive synchronization method. Numerical simulations are performed to verify the theoretical analysis."

Generalized Bulgac-Kusnezov methods for sampling of the Gibbs-Boltzmann measure

Generalized Bulgac-Kusnezov methods for sampling of the Gibbs-Boltzmann measure: "

Author(s): Benedict Leimkuhler

A wide family of methods is described for sampling in the canonical ensemble. The Bulgac-Kusnezov method is generalized to include a more complicated coupling structure and stochastic perturbations. It is shown that a controlled fluctuation of the potential surface or force field in a molecular mode...

[Phys. Rev. E 81, 026703] Published Fri Feb 12, 2010

"

Smarter than Google?

Smarter than Google?: "Dutch mathematician Nicole Koenderink obtained her PhD yesterday. Her thesis involved a search engine that is smarter than Google. This machine taps into the knowledge of experts and poses questions in return."

A directed Monte Carlo solution of linear stochastic algebraic system of equations

A directed Monte Carlo solution of linear stochastic algebraic system of equations: "Publication year: 2010
Source: Finite Elements in Analysis and Design, In Press, Corrected Proof, Available online 12 February 2010
Y.T. Feng, C.F. Li, D.R.J. Owen
This paper proposes a modified Monte Carlo simulation method for the solution of a linear stochastic algebraic system of equations arising from the stochastic finite element modelling of linear elastic problems. The basic idea is to direct Monte Carlo samples along straight lines and then utilise their spatial proximity or order to provide high quality initial approximations in order to significantly accelerate the convergence of iterative solvers at each sample. The method, termed the directed Monte Carlo (DMC) simulation, is developed first for one random variable using the preconditioned conjugate gradient equipped with an initial approximation prediction scheme, and then..."

A numerical methods paper published in Physical Review Letters

A numerical methods paper published in Physical Review Letters: "


Those of us who work (do research) on numerical methods and are physicists by training have a thorn in our side: publishing in Physical Review Letters. One always wants to believe that one does computational physics and will some day be able to publish in PRL, but leafing through this journal's tables of contents every week… and knowing that publishing in PRL gets harder every day… Still, hope is the last thing to die, and seeing a numerical methods paper in PRL is a great joy. I would never have believed that the paper by the physicists Mark K. Transtrum, Benjamin B. Machta and James P. Sethna, of Cornell University, Ithaca, New York, entitled “Why are Nonlinear Fits to Data so Challenging?,” which I skimmed months ago on ArXiv and to which I was about to devote an entry in this blog, would end up being published in Physical Review Letters 104: 060201, 12 Feb. 2010. Congratulations!


The paper studies the fitting of a model's parameters to experimental data. This problem can easily be solved with Mathematica, Matlab or any other standard scientific software. However, as the number of parameters grows the difficulties pile up, and the conventional numerical methods used by such programs run into convergence problems. I remember once fitting the parameters of a model of the breathing of monkeys under stress. Matlab (using the variant of the Levenberg-Marquardt algorithm developed by Powell) needed several days of work (about 10 years ago now) and in the end the result did not convince me. It did fit, but the parameter values were not very “reasonable” and I never liked the result. The problem probably had multiple solutions, and there was a solution escaping me that fitted the data and looked “reasonable”. Truth be told, after some time devoted to the matter, I preferred to abandon it without publishing. Those of you who are researchers know this happens to us often. And when the topic is a side issue (I study wave propagation in nonlinear dispersive media), one abandons it without any remorse.


The paper by Transtrum et al. tries to explain why many optimization algorithms used to fit a model's parameters find unreasonable solutions: they enter a region of parameter space in which the model does not respond to changes in those parameters. The only way to escape such “bad” regions is to adjust the parameters by hand (that is, to reparametrize the problem with intelligence and intuition). The authors try to explain this phenomenon by appealing to the interpolation theory for function approximation that underlies all optimization algorithms. Transtrum et al. introduce a notion of (extrinsic) curvature on the model's parameter space (an idea from differential geometry little exploited in numerical analysis) that makes it possible to identify these “bad” search regions, and they use it to improve the convergence of the Levenberg-Marquardt algorithm, allowing it to escape from them.


Readers of this blog who have never heard of the Levenberg-Marquardt algorithm and have never optimized a model's parameters from experimental data probably do not realize how important this numerical task is in the daily practice of experimental physicists, biologists, economists, etc. The best fit is usually of least-squares type: one minimizes a certain positive-definite cost function, normally the sum of the squares of the deviations between the model and the experimental data. Many models depend nonlinearly on their parameters, so this minimization requires numerical methods for solving nonlinear systems of equations. Finding the optimal solution is hard because the parameter space (of possible models) is full of local minima that do not correspond to the best possible fit (the global minimum). Moreover, the cost function may be very sensitive to certain parameter values and very insensitive to others. The idea of Transtrum et al. is that in parameter space there is a hypersurface (a differentiable manifold) on which the optimal solution lies and which can go unnoticed by the optimization algorithm because it is too thin to be resolved correctly; they call it a “hyper-ribbon”. To detect the presence of these “hyper-ribbons” they propose a notion of curvature based on assuming that at each step the numerical method follows a geodesic in parameter space with certain geodesic “velocity” and “acceleration”. The curvature is computed from these second derivatives and makes it possible to accelerate the convergence of a numerical method (they apply it only to the Levenberg-Marquardt algorithm, LM).
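
For the curious, this is all a least-squares fit amounts to in code. The sketch below uses scipy's ordinary Levenberg-Marquardt (without the geodesic acceleration of Transtrum et al.) to fit a sum of two exponentials, a classic "sloppy" model whose parameter space shows exactly the kind of flat regions discussed above; the data and starting point are made up for illustration.

```python
# Ordinary Levenberg-Marquardt fit of a sum of two exponentials, a classic
# "sloppy" model; this is plain scipy, without geodesic acceleration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
t = np.linspace(0, 5, 100)
true = np.array([1.0, 0.4, 0.5, 2.0])            # A1, k1, A2, k2
y = true[0] * np.exp(-true[1] * t) + true[2] * np.exp(-true[3] * t)
y += 0.01 * rng.standard_normal(t.size)          # measurement noise

def residuals(p):
    A1, k1, A2, k2 = p
    return A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t) - y

fit = least_squares(residuals, x0=[0.5, 1.0, 1.0, 3.0], method="lm")
print("estimated parameters:", np.round(fit.x, 3))
```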


What speedup do they obtain with the new algorithm? For the example included in the paper (a systems-biology problem based on a metabolic network modeled with Michaelis-Menten laws), the accelerated LM algorithm finds the optimal answer 65% of the time, whereas the plain LM algorithm finds it only 33% of the time. A great achievement! Well, not that great: computing the solution in 2 hours instead of 4 does not seem like a major scientific advance. But… I am eaten up with envy.


Will I send my next numerical paper to PRL to see if it slips through? I don't know, I'll think about it… but I will have to sell it well, since the answer I expect is that Physical Review Letters publishes physics, not numerical methods. Well… almost always.


Filed under: Science "

High-order WENO scheme for Polymerization-type equations. (arXiv:1002.2174v1 [math.AP] CROSS LISTED)

High-order WENO scheme for Polymerization-type equations. (arXiv:1002.2174v1 [math.AP] CROSS LISTED): "

Polymerization of proteins is a biochemical process involved in different
diseases. Mathematically, it is generally modeled by
aggregation-fragmentation-type equations. In this paper we consider a general
polymerization model and propose a high-order numerical scheme to investigate
the behavior of the solution. An important property of the equation is the mass
conservation. The fifth-order WENO scheme is built to preserve the total mass
of proteins over time.

"

Applications of the Digital-Discrete Method in Smooth-Continuous Data Reconstruction. (arXiv:1002.2367v2 [math.NA])

Applications of the Digital-Discrete Method in Smooth-Continuous Data Reconstruction. (arXiv:1002.2367v2 [math.NA]): "

This paper presents some applications of using recently developed algorithms
for smooth-continuous data reconstruction based on the digital-discrete method.
The classical discrete method for data reconstruction is based on domain decomposition according to guiding (or sample) points; it then uses splines (for polynomials) or the finite element method (for PDEs) to fit the data. Our method is based on gradually varied functions and does not assume linear separability among the guiding points, i.e. no domain decomposition methods are needed. We also demonstrate the flexibility of the new method and its potential to solve a variety of problems. The examples include some real data from water well logs and harmonic functions on closed 2D manifolds. This paper presents the results of six different algorithms. The method can easily be extended to higher dimensions.

"

Thursday, February 11, 2010

Another Brick in the Wall




another brick in the wall..............

The Radon exchange graph

The Radon exchange graph: "Radon's theorem states that, for a set of d + 2 points in general position in d-dimensional space there is a (unique) partition of the set into two subsets whose convex hulls intersect in a point. Here general position means no d + 1 of the points lie on a common hyperplane. For instance, if the four points p, q, r, and s in the plane form a convex quadrilateral pqrs, then in the partition ps/rq the two line segments ps and rq (the diagonals of the quadrilateral) cross each other, while if point p lies inside triangle qrs then the partition p/qrs works, and has p as the point of intersection of the two hulls.

Now suppose that we have n ≥ d + 2 points, still in general position. Each (d + 2)-tuple of the points defines a Radon partition. Define two of these partitions A/B and C/D to be adjacent if they differ by the removal of one point from A or B, and the addition of a different point to the corresponding set C or D. This forms a 2(n − d − 2)-regular graph that has as its vertices all possible (d + 2)-tuples and as its edges the adjacencies.

Why is this graph regular? Because, if A/B is any Radon partition of some d + 2 points of the set, and x is any point outside that (d + 2)-tuple, then there is a unique Radon partition formed by adding x to A and removing some other point from A or B, and there is another unique Radon partition formed by adding x to B and removing some other point from A or B. If A has fewer than d + 1 points, then the convex hulls of A+x and B intersect in a line segment; the Radon partition A/B has one endpoint of this line segment as its intersection point and the other endpoint is the intersection point of an adjacent partition. If, on the other hand, A has exactly d + 1 points, then A+x takes the form of a simplex divided into d + 1 smaller simplices; in this case, the outer simplex and exactly one of the inner simplices contain the sole point of B, and these two simplices form (with B) two adjacent partitions.

We can use this Radon exchange graph to prove a higher-dimensional relative of the Happy Ending problem first proven (using a different technique, Gale diagrams) by Micha Perles: for every set of d + 3 points in general position there is a subset of d + 2 that form the vertices of a neighborly polytope. A neighborly polytope is a polytope such that the convex hull of every subset of d/2 or fewer vertices forms a face; for instance, in four dimensions, every pair of vertices must form an edge. Perles' proof works for all dimensions (or at least, Grünbaum's book Convex Polytopes, where it appears as an exercise on p.120, doesn't say anything about it not working in all dimensions), whereas the proof below using the Radon exchange graph works only for even dimensions. However, it proves a slightly stronger result, that there are an odd number of ways of choosing d + 2 vertices to form a neighborly polytope.

Define a Radon partition to be k-unbalanced (for k < d/2 + 1) if one side of the partition has k or fewer points. Then, in the Radon exchange graph for any set of d + 3 points, the k-unbalanced partitions have a perfect matching. The proof is by induction on k: suppose by induction that the (k − 1)-unbalanced partitions are already perfectly matched, consider the partitions A/B where |A|=k, and for each such partition consider the edge to the adjacent partition formed by adding the missing point x to B and removing a point. The edges where the removed point comes from B form a matching among some of these partitions; but when the removed point comes from A, these edges go to the already-matched (k − 1)-unbalanced partitions. Each path in the Radon exchange graph formed by these already-matched partitions must have exactly two edges connecting it to the rest of the graph, both of which are among the set of edges being considered. By replacing alternating edges along each such path, we get a matching of all the k-unbalanced partitions.

The parity version of Perles' problem follows immediately: in any dimension d, and for any set of d + 3 points, the number of unbalanced partitions (for all choices of k) is even, and the total number of partitions is exactly d + 3, so the number of balanced partitions has the opposite parity from d. And in even dimensions a set of d + 2 points forms a neighborly polytope if and only if its Radon partition is balanced, so for even dimensions there is an odd number of ways of forming a neighborly polytope.

In all the examples I calculated, the Radon exchange graphs for sets of d + 3 points always took the form of a single (d + 3)-vertex cycle. But I don't know whether that's always true, or whether I just haven't looked at sufficiently many examples. If it is always a cycle, then these cycles could be used to define 2-faces of a higher-dimensional Radon exchange complex..."

Developing computational methods for three-dimensional finite element simulations of coronary blood flow

Developing computational methods for three-dimensional finite element simulations of coronary blood flow: "Publication year: 2010
Source: Finite Elements in Analysis and Design, In Press, Corrected Proof, Available online 10 February 2010
H.J. Kim, I.E. Vignon-Clementel, C.A. Figueroa, K.E. Jansen, C.A. Taylor
Coronary artery disease contributes to a third of global deaths, afflicting seventeen million individuals in the United States alone. To understand the role of hemodynamics in coronary artery disease and better predict the outcomes of interventions, computational simulations of blood flow can be used to quantify coronary flow and pressure realistically. In this study, we developed a method that predicts coronary flow and pressure of three-dimensional epicardial coronary arteries by representing the cardiovascular system using a hybrid numerical/analytic closed loop system comprising a three-dimensional model of the aorta, lumped parameter coronary vascular models to represent the coronary vascular networks, three-element..."

Numerical Modeling of Annular Laminar Film Condensation in Circular and Non-Circular Micro-Channels under Normal and Micro-Gravity

Numerical Modeling of Annular Laminar Film Condensation in Circular and Non-Circular Micro-Channels under Normal and Micro-Gravity

Quantum Monte Carlo study of the transcorrelated method for correlation factors

Quantum Monte Carlo study of the transcorrelated method for correlation factors

Applications of Screened Hybrid Density Functionals with Empirical Dispersion Corrections to Rare Gas Dimers and Solids

Applications of Screened Hybrid Density Functionals with Empirical Dispersion Corrections to Rare Gas Dimers and Solids: "Journal of Chemical Theory and Computation, Volume 0, Issue 0, Articles ASAP (As Soon As Publishable)."

Symmetries C, P, T, CP and CPT

Symmetries C, P, T, CP and CPT: "

Ezequiel Álvarez is an Argentine theoretical physicist who did his doctoral thesis as an FPI fellow in the Departamento de Física Teórica of the Universidad de Valencia, under the supervision of José Bernabéu Alberola (winner of the 2008 Premio Rey Jaime I in Basic Research). Bernabéu is a full professor of theoretical physics with an h-index of 28. I recommend his popular-science article: Josep Bernabeu Alberola, Enrique Fernández Cara, “Neutrinos: medio siglo de sorpresas,” Revista Española de Física 17: 35-47, 2003.


Why bring up Ezequiel on this blog? Because I have seen a copy of his doctoral thesis (March 9, 2006) on ArXiv, on the T, CP and CPT symmetries in B mesons, in particular on the possible violation of CPT symmetry through the loss of indistinguishability between the B0 meson and its antiparticle, anti-B0 (lately I have been reading a lot about CPT symmetry and its possible violation in Higgs boson physics). The thesis is titled “CP, T and CPT analyses in EPR – correlated B B0 decays,” ArXiv, 14 Mar 2006. I liked the introduction, originally written in Spanish, so here is a (partial) copy for your enjoyment. Those more fond of technicalities will also enjoy the rest (although in meson physics I prefer the work of Eulogio Oset).


“Symmetry is, perhaps, one of the most astonishing and interesting archetypes of the human race. From the earliest motivations behind the artistic legacy of Homo sapiens sapiens, through the Egyptians, the Greeks, the Romans, the Arabs and every other civilization, down to the present day, we can perceive and feel it in a considerable number of inventions and creations of the human mind. Its power is so strong that at times it can even blur the boundary between archetype and instinct.


Symmetry has proved to be an essential tool for the development of science, and today it is one of the leading concepts of modern physics and mathematics. The two most brilliant theoretical developments of the twentieth century, the Theory of Relativity and Quantum Theory, incorporate notions of symmetry in a fundamental and irreplaceable way. It would be no surprise if, in the future, the Laws of Nature end up being written solely in terms of notions of symmetry.


In particle physics, symmetries are divided into continuous and discrete ones. The most important discrete symmetries are C, P, T and their combinations CP and CPT. Charge conjugation (C) is the mathematical operation that changes the signs of all the charges of a particle; for example, it flips the sign of the electric charge. Charge conjugation implies that for every charged particle there exists an antiparticle with the opposite charge. The antiparticle of an electrically neutral particle may be identical to the particle, as is the case for the neutral pion, or it may be distinct, as happens with the antineutron because of baryon number. Parity (P), or spatial inversion, is the reflection through the origin of the coordinate space of a system of particles; i.e., the three spatial dimensions x, y, and z become −x, −y, and −z, respectively. Time reversal (T) is the mathematical operation that replaces the expression for time by its negative in formulas or equations, so that they describe an event in which all motions are reversed. A formula or equation that remains unchanged by this operation is said to be invariant under time reversal, which implies that the same laws of physics apply in both situations and that the second event is indistinguishable from the original one. A film of two colliding billiard balls, for example, can be played forwards or backwards with no clue as to which is the original sequence of events.


Half a century ago it was thought that charge conjugation, parity and time reversal were exact symmetries of elementary processes, those involving electromagnetic, strong and weak interactions. However, everything changed in the mid-1950s. C.N. Yang and T.D. Lee examined the experimental foundations of parity conservation in 1956, since the experimental evidence pointed to a possible violation of parity conservation in the decay of charged K mesons into two or three π mesons (pions). Their theoretical work showed that there was no robust experimental proof of parity invariance in the weak interactions. The experiments carried out the following year by Dr. C.-S. Wu definitively verified that parity is violated in (weak) beta decay. Moreover, they revealed that charge conjugation symmetry is also violated in this process. The discovery that the weak interactions conserve neither parity nor charge conjugation separately led to a quantitative theory that established the CP combination as a symmetry of Nature.


Physicists thus reasoned that if CP was an exact symmetry then T should be one too, because of the CPT theorem. However, subsequent experiments, carried out in 1964, showed that the long-lived electrically neutral K mesons, which were supposed to decay into three pions, decayed a fraction of the time into only two of these particles, thereby violating CP symmetry. Assuming the fundamental CPT theorem, CP violation also implies a violation of T. In this theorem, considered one of the pillars of quantum field theory, charge conjugation, parity and time reversal are all applied together and, combined, these symmetries constitute an exact symmetry of all types of fundamental interactions. It is worth noting that experiments to verify the validity of CPT symmetry are constantly being performed, and to this day it has always been found to hold.


The violations of CP and T have important theoretical consequences. The violation of CP symmetry allows physicists to make an absolute distinction between matter and antimatter. This distinction can have profound implications in the field of cosmology: one of the theoretical unknowns in physics is why this Universe is made up mostly of matter. Under a series of debatable, but plausible, assumptions, it can be shown that the observed ratio of matter to antimatter could have been produced by the effect of CP violation during the first fractions of a second after the Big Bang. However, contrary to our expectations, the CP violation measured in particle physics so far is not sufficient to generate baryogenesis.”


"

miércoles, 10 de febrero de 2010

Thermally developing flow in finned double-pipe heat exchanger

Thermally developing flow in finned double-pipe heat exchanger: "A numerical solution of the convective heat transfer in the thermal entry region of the finned double-pipe is carried out for the case of hydrodynamically fully developed flow when subjected to a uniform wall temperature boundary condition. An adaptive axial grid size is used in order to cater for the large solution gradients in the axial direction. It has been observed that the thermal entrance region is highly effective and there is a substantial enhancement in the heat transfer coefficient. A maximum increase of 76.4877% has been observed in the thermal entrance region as compared with the fully developed region for 24 fins and H*=0.6 when R̂=0.25, whereas for R̂=0.5 the maximum increase is 75.0308% for the same number of fins of the same height. It has been observed that no geometry consistently performs better throughout the entrance region. However, the geometries that have optimal performance in the fully developed region perform better, on average, in the developing region. Results show that the Nusselt number and the thermal entrance length depend upon various geometrical parameters such as the ratio of the radii of the inner and the outer pipe, the fin height and the number of fins. The limiting-case results match well with the literature results, which validates our numerical procedure and computer code. Copyright © 2010 John Wiley & Sons, Ltd."

Information flux maximum-entropy approximation schemes for convection-diffusion problems

Information flux maximum-entropy approximation schemes for convection-diffusion problems: "The requirement for stabilization or other similar techniques is well known when using the finite element method in computational fluid mechanics. A variety of such techniques has been introduced during the past decades, along with different physical interpretations of the stabilization terms employed. In introducing so-called information flux methods, we developed a new point of view on the problem of numerical instabilities; with respect to Shannon's information theory, instabilities are interpreted as a consequence of inadequate observance of the information flux present in fluid mechanics. Here we discuss different approaches to setting up information flux maximum-entropy approximation schemes based on that idea. The good accuracy of these approximation schemes is demonstrated for convection-diffusion problems by means of several linear, time-independent one- and two-dimensional numerical examples and comparisons with state-of-the-art stabilized finite element methods. Copyright © 2010 John Wiley & Sons, Ltd."

A Direct Solver for the Rapid Solution of Boundary Integral Equations on Axisymmetric Surfaces in Three Dimensions. (arXiv:1002.2001v1 [math.NA])

A Direct Solver for the Rapid Solution of Boundary Integral Equations on Axisymmetric Surfaces in Three Dimensions. (arXiv:1002.2001v1 [math.NA]): "

A scheme for rapidly and accurately computing solutions to boundary integral
equations (BIEs) on rotationally symmetric surfaces in three dimensions is
presented. The scheme uses the Fourier transform to reduce the original BIE
defined on a surface to a sequence of BIEs defined on a generating curve for
the surface. It can handle loads that are not necessarily rotationally
symmetric. Nystrom discretization is used to discretize the BIEs on the
generating curve. The quadrature used is a high-order Gaussian rule that is
modified near the diagonal to retain high-order accuracy for singular kernels.
The reduction in dimensionality, along with the use of high-order accurate
quadratures, leads to small linear systems that can be inverted directly via,
e.g., Gaussian elimination. This makes the scheme particularly fast in
environments involving multiple right hand sides. It is demonstrated that for
BIEs associated with Laplace's equation, the kernel in the reduced equations
can be evaluated very rapidly by exploiting recursion relations for Legendre
functions. Numerical examples illustrate the performance of the scheme; in
particular, it is demonstrated that for a BIE associated with Laplace's
equation on a surface discretized using 320 000 points, the set-up phase of the
algorithm takes 2 minutes on a standard desktop, and then solves can be
executed in 0.5 seconds.

"

The Mystery of the Shape Parameter II. (arXiv:1002.2082v1 [math.NA])

The Mystery of the Shape Parameter II. (arXiv:1002.2082v1 [math.NA]): "

In this paper we present criteria for the choice of the shape parameter c
contained in the famous radial function multiquadric. It may be of interest to
RBF people and all people using radial basis functions to do approximation.

"

Efficient component-wise and solver-based intrusive SFEM analysis of complex structures

Efficient component-wise and solver-based intrusive SFEM analysis of complex structures: "Publication year: 2010
Source: Finite Elements in Analysis and Design, In Press, Corrected Proof, Available online 9 February 2010
H.M. Panayirci, M.F. Pellissetti
In this study, important computational aspects of intrusive stochastic finite element methods (SFEM) are investigated. The importance of the interaction with 3rd party deterministic finite element (FE) solvers is emphasized within this context. Especially, data management is in the focus. Two implementation techniques, namely the component-wise and the solver-based implementations, have been introduced in order to improve the computational efficiency. It is shown that by reducing the quantity and the size of the transferred system matrices, a significant reduction in the computational cost can be achieved. This is demonstrated on representative FE models, which are analyzed with the perturbation method,..."

An enriched discontinuous Galerkin formulation for the coupling of non-conforming meshes

An enriched discontinuous Galerkin formulation for the coupling of non-conforming meshes: "Publication year: 2010
Source: Finite Elements in Analysis and Design, In Press, Corrected Proof, Available online 9 February 2010
G. Haikal, K.D. Hjelmstad
Finite element modeling using non-conforming meshes requires an interface model that ensures geometric compatibility and a complete transfer of surface tractions between the connecting elements at the non-conforming interfaces. Most currently available coupling methods are dual approaches that employ a field of Lagrange multipliers to enforce geometric compatibility at the interface. The choice of the Lagrange multiplier field is not trivial, since not all possible interpolations satisfy the inf–sup or Ladyzhenskaya–Babuška–Brezzi (LBB) condition. The primal discontinuous Galerkin (DG) and Nitsche methods are not subject to the LBB restrictions; however, in both these methods a mesh-dependent penalty parameter is required to...

Natural and artificial atoms for quantum computation. (arXiv:1002.1871v1 [quant-ph])

Natural and artificial atoms for quantum computation. (arXiv:1002.1871v1 [quant-ph]): "

Remarkable progress towards realizing quantum computation has been achieved
using natural and artificial atoms. On the one hand, natural atoms (such as
neutral atoms and ions) have long coherence times, and could be stored in large
arrays, providing ideal 'quantum memories'. On the other hand, artificial atoms
(such as superconducting circuits or semiconductor quantum dots) have the
advantage of custom-designed features and could be used as 'quantum processing
units'. Natural and artificial atoms can be coupled with each other and can
also be interfaced with photons for long-distance communications. Hybrid
devices made of natural/artificial atoms and photons may provide the
next-generation design for quantum computers.

"

Algebraic multilevel iteration methods and the best approximation to $1/x$ in the uniform norm. (arXiv:1002.1859v1 [math.NA])

Algebraic multilevel iteration methods and the best approximation to $1/x$ in the uniform norm. (arXiv:1002.1859v1 [math.NA]): "

In this note, we provide simple convergence analysis for the algebraic
multilevel iteration methods. We consider two examples of AMLI methods with
different polynomial acceleration. The first one is based on shifted and scaled
Chebyshev polynomial and the other on the polynomial of best approximation to
$x^{-1}$ on a finite interval with positive endpoints in the uniform norm. The
construction of the latter polynomial is of interest by itself, and we have
included a derivation of a 3 term recurrence relation for computing this
polynomial. We have also derived several inequalities related to the error of
best approximation, which we applied in the AMLI analysis.

"

Analytic Regularity for Linear Elliptic Systems in Polygons and Polyhedra. (arXiv:1002.1772v1 [math.AP])

Analytic Regularity for Linear Elliptic Systems in Polygons and Polyhedra. (arXiv:1002.1772v1 [math.AP]): "

We prove weighted anisotropic analytic estimates for solutions of model
elliptic boundary value problems in polyhedra. The weighted analytic classes
which we use are the same as those introduced by B. Guo in 1993 in view of
establishing exponential convergence for hp methods in polyhedra. We first give
a simple proof of the weighted analytic regularity in a polygon, relying on new
elliptic a priori estimates with analytic control of derivatives in smooth
domains. The technique is based on dyadic partitions near the corners. This
technique can be successfully extended to polyhedra, but only isotropic
analytic regularity can be proved in this way. We therefore combine it with a
nested open set technique to obtain the three-dimensional anisotropic analytic
result. Our proofs are global and do not rely on the analysis of singularities.

"

martes, 9 de febrero de 2010

Noise Stability and Threshold Circuits

Noise Stability and Threshold Circuits: "

The purpose of this post is to describe an old conjecture (or guess, see this post) by Itai Benjamini, Oded Schramm and myself (taken from this paper) on noise stability of threshold functions. I will start by formulating the conjectures, and a little later I will explain further the notions I am using.


The conjectures


Conjecture 1: Let f be a monotone Boolean function described by monotone threshold circuits of size M and depth D. Then f is stable to (1/t)-noise where t=(\log M)^{100D}.


Conjecture 2: Let f be a monotone Boolean function described by threshold circuits of size M and depth D. Then f is stable to (1/t)-noise where t=(\log M)^{100D}.


The constant 100 in the exponent is, of course, negotiable. In fact, replacing 100D with any function of D will be sufficient for the applications we have in mind. The best we can hope for is that the conjectures are true if t behaves like t=(\log M)^{D-1}.


Conjecture 1 is plausible but it looks difficult. The stronger Conjecture 2 while tempting is quite reckless. Note that the conjectures differ “only” in the location of the word “monotone”. (In the first conjecture the first appearance of “monotone” is not essential since every function that can be described by a monotone threshold circuit is monotone.)


Recursive majority on 3s


There are many Boolean functions that are very noise sensitive. A simple example is the recursive majority on 3s (RM3), defined as follows: Suppose that n=3^m. Divide the variables into three equal parts. Compute the RM3 separately for each of these parts and apply the majority to the three outcomes.
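
For concreteness, here is a tiny Python sketch of RM3 (my own illustration, not part of the quoted post; the function name rm3 is an assumption):

def rm3(bits):
    # Recursive majority of 3s on n = 3**m bits (each bit is 0 or 1).
    n = len(bits)
    if n == 1:
        return bits[0]
    third = n // 3
    votes = [rm3(bits[i * third:(i + 1) * third]) for i in range(3)]
    return 1 if sum(votes) >= 2 else 0

# maj(maj(1,0,1), maj(0,0,1), maj(1,1,0)) = maj(1, 0, 1) = 1
print(rm3([1, 0, 1, 0, 0, 1, 1, 1, 0]))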


Computational complexity consequences


C1 (To Conjecture 1): The RM3 is not in Monotone TC_0.


C2 (To Conjecture 1): The RM3 cannot be approximated by a function in Monotone TC_0.


C3 (To Conjecture 2): The RM3 is not in TC_0.


C4 (To Conjecture 2): The RM3 cannot be approximated by a function in TC_0.


The first consequence may already follow from results by Andy Yao and by Johan Hastad and Mikael Goldmann. (See this paper. They talked about the “AND/OR-tree” rather than RM3.)


Of course, we can replace RM3 with other noise-sensitive properties. For example, we can take the crossing event in planar percolation which is closely related to the problem of “s-t connectivity”. (See this post about noise sensitivity.) We can extend the conjectures by replacing the uniform probability distribution by other product probability distributions; doing so will allow us to replace RM3 by the AND/OR-tree. It is reasonable to believe that all these conjectures are true, but also that C3 and C4 are extremely difficult. I would also not underestimate the gap between the exact conjectures C1 and C3 and the approximate conjectures C2 and C4. Showing that RM3 cannot even be approximated by a function described by a monotone (or general) threshold circuit of bounded depth and polynomial size might be very hard. Maybe RM3 does admit such an approximation.


Some basic definitions


Let f be a Boolean function of n variables x_1, x_2, \dots, x_n. Let t>0 be a small real number. Assign values to the variables uniformly at random and then consider y_1, y_2, \dots, y_n such that y_i=x_i with probability 1-t and y_i=1-x_i with probability t.


Fix a constant \epsilon = 0.0000001. We say that a Boolean function f with n variables is stable to noise t if the probability that f(x_1,x_2,\dots, x_n) \ne f(y_1,y_2,\dots, y_n) is below \epsilon.
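
A quick way to get a feel for these definitions is to estimate the disagreement probability by simulation. Here is a minimal Monte Carlo sketch (mine, not from the post; noise_disagreement is an assumed name, and the example reuses the rm3 sketch above):

import random

def noise_disagreement(f, n, t, trials=10000):
    # Estimate P[f(x) != f(y)], where x is uniform on {0,1}^n and y flips each
    # bit of x independently with probability t.
    count = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [b ^ 1 if random.random() < t else b for b in x]
        count += f(x) != f(y)
    return count / trials

majority = lambda bits: 1 if sum(bits) > len(bits) / 2 else 0

# Estimated disagreement at 10% noise for plain majority and for RM3 on 81 = 3**4 bits.
print(noise_disagreement(majority, 81, 0.1))
print(noise_disagreement(rm3, 81, 0.1))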


A threshold function is a function of the form f(x_1,x_2,\dots,x_n) = sgn(\sum w_i x_i), where w_1, w_2, \dots, w_n are real weights. If all weights are nonnegative the function is called a monotone threshold function (or a weighted majority function).


A threshold circuit is a circuit all of whose gates are linear threshold functions. (We mentioned circuits in this post.) A monotone threshold circuit is a threshold circuit in which all gates are monotone threshold functions.
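
As an entirely illustrative instance of these definitions, here is a sketch (mine; the function names and block structure are assumptions) of a linear threshold gate and a depth-two monotone threshold circuit, namely a majority of majorities over disjoint blocks. The shift argument plays the role of a threshold; in the sgn(\sum w_i x_i) formulation it can be absorbed as a weight on a constant input.

import random

def threshold_gate(weights, bits, shift=0.0):
    # Linear threshold gate: output 1 iff sum_i w_i * x_i exceeds the shift.
    return 1 if sum(w * b for w, b in zip(weights, bits)) > shift else 0

def two_level_monotone_circuit(bits, block=5):
    # Depth-2 monotone threshold circuit: each bottom gate is a majority of one block
    # of inputs, and the top gate is a majority of the bottom outputs.  All weights
    # are nonnegative, so every gate is a monotone threshold function.
    blocks = [bits[i:i + block] for i in range(0, len(bits), block)]
    bottom = [threshold_gate([1.0] * len(blk), blk, shift=len(blk) / 2) for blk in blocks]
    return threshold_gate([1.0] * len(bottom), bottom, shift=len(bottom) / 2)

x = [random.randint(0, 1) for _ in range(25)]
print(two_level_monotone_circuit(x))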


The AC_0 analogs


Here, briefly, is the situation for AC_0.


Theorem (Boppana): If f is a function that can be described by a depth D size M monotone Boolean circuit then I(f) \le C(\log M)^{D-1}.


Here I(f) denotes the total influence of f. See this post about influence.
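
For small n, the total influence can be computed directly from its definition, I(f) = \sum_i Pr_x[f(x) \ne f(x with the i-th bit flipped)]. A brute-force sketch (mine, purely illustrative):

from itertools import product

def total_influence(f, n):
    # I(f) = sum over coordinates i of Pr_x[f(x) != f(x with bit i flipped)], x uniform on {0,1}^n.
    total = 0.0
    for i in range(n):
        flips = 0
        for x in product([0, 1], repeat=n):
            y = list(x)
            y[i] ^= 1
            flips += f(list(x)) != f(y)
        total += flips / 2 ** n
    return total

majority = lambda bits: 1 if sum(bits) > len(bits) / 2 else 0
parity = lambda bits: sum(bits) % 2

print(total_influence(majority, 9))   # about 2.46: a bit is pivotal iff the other eight bits split 4-4
print(total_influence(parity, 9))     # exactly 9.0: every bit is always pivotal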


The famous Hastad switching lemma allows us to prove


Theorem (Hastad-Boppana): If f is a function that can be described by a depth D size M Boolean circuit then I(f) \le C(\log M)^{D-1}.


Theorem (Linial Mansour Nisan; improved by Hastad): If f is a function that can be described by a depth D size M Boolean circuit then \sum\{\hat f^2(S) : |S|=t\} decays exponentially with t when t > C(\log M)^{D-1}. (See the original papers for the precise formulation and the meaning of “decay exponentially”.)
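
For small n the Fourier weights in the last theorem can also be computed by brute force. A small sketch (mine, purely illustrative), using the usual ±1 convention F = 1 − 2f, computes \hat F(S) = E_x[F(x)\prod_{i\in S}(-1)^{x_i}] and the weight \sum\{\hat F^2(S) : |S|=t\} at each level t:

from itertools import combinations, product

def fourier_level_weights(f, n):
    # weights[t] = sum over all S with |S| = t of \hat F(S)^2, where F = 1 - 2*f.
    inputs = list(product([0, 1], repeat=n))
    F = [1 - 2 * f(list(x)) for x in inputs]
    weights = [0.0] * (n + 1)
    for t in range(n + 1):
        for S in combinations(range(n), t):
            chat = sum(Fx * (-1) ** sum(x[i] for i in S) for x, Fx in zip(inputs, F)) / 2 ** n
            weights[t] += chat ** 2
    return weights                        # by Parseval, these weights sum to 1

majority = lambda bits: 1 if sum(bits) > len(bits) / 2 else 0
print(fourier_level_weights(majority, 5))  # Maj_5 is odd, so all its weight sits on odd levels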


Positive vs. Monotone


We regard consequences C1/C2 as plausible, but consequences C3/C4 as extremely difficult. But can we present a single example of a function for which C3/C4 apply but not C1/C2? This is also difficult!




Theorem (Miklós Ajtai and Yuri Gurevich): There are monotone functions in AC_0 that are not in Monotone AC_0. (Here is the paper!)


Problem 1: Are there monotone functions in AC_0 that cannot be approximated by functions in Monotone AC_0?


Formally we ask, for every fixed real \epsilon>0 and integers d>0,~c>0,~d'\ge d, and c' \ge c, for a sequence of monotone functions f_n of depth d and size at most Kn^c (K and c are fixed constants and f_n is a function on n variables), such that:


The distance between f_n and every function g that can be expressed by a depth d' , size n^{c'} monotone Boolean circuit is at least \epsilon.


Problem 2: Are there monotone functions in TC_0 that are not in Monotone TC_0?


Problem 3: Are there monotone functions in TC_0 that cannot be approximated by functions in Monotone TC_0?


Natural proofs barrier?


One reason to be suspicious about Conjecture 2 is the famous natural proof barrier in computational complexity pioneered by Razborov and Rudich. A class of functions that allows the creation of pseudorandom generators cannot be separated by a “natural proof”. Moni Naor and Omer Reingold described pseudorandom functions in TC_0.


A natural proof is very roughly a method that distinguishes a low complexity function from a typical Boolean function. You should not be satisfied by this very rough description and read Dick Lipton’s post “Who’s afraid of natural proofs” or the original paper or many other sources.


A random monotone Boolean function is noise stable. In fact a random monotone Boolean function is very close to the majority function. So the natural proof barrier does not apply automatically to Conjecture 2.


Dick Lipton suggested in his post (in a specific way that we will not repeat here) that monotonicity may be used to get around the natural proof barrier. The property of being monotone is fairly mysterious and we do not understand it well enough. As we mentioned, typical monotone functions are very much like the majority function.


Suppose we want to prove that NP is different from P by presenting a property X of monotone Boolean functions such that monotone Boolean functions which satisfy X are not in P, and are not even \epsilon-close to P. We may expect that such a property X will not hold for random monotone Boolean functions, as the latter are so similar to simple majority. Does this give some hope to get around the natural proof barrier? Probably not more than a faint hope. (It is difficult to imagine a proof technique that will use monotonicity of a target function for non-monotone circuits.) In any case, we should take note of it:


Question: Does monotonicity give a way to get around the natural proof barrier?


A more promising avenue is going from the computer science constructions to combinatorics and probability questions. One interesting direction would be to try to use the constructions of Boolean functions coming from cryptography and complexity, such as the Naor-Reingold construction, to shed light on combinatorial and probabilistic properties of Boolean functions.


Small remarks.


It is not obvious that weighted majority functions are noise stable. Here is a sharp argument by Yuval Peres.


Some small remarks about Conjecture 1: a Boolean function f described by a two-round majority represents a threshold circuit of depth two. Proving noise stability for two-round majority is easy by induction. If you use the same variable more than once then things get more complicated. (I do not have a clue for the case that you allow negative weights.) With Ravi Kannan we looked at monotone threshold circuits of depth two, and noticed that in some cases we can deduce from noise stability of the functions described by the gates that the function itself is not completely noise-sensitive to the required level of noise. (But this is considerably weaker than what we really need.)


One can conjecture that if a function described by a bounded depth monotone threshold circuit is balanced (namely, the probability for f=1 is bounded away from 0 and from 1) then it is stable to a constant level of noise. However, Jean Bourgain gave an example showing that this is not the case.


Polynomial threshold gates.


There is fruitful recent research on functions that can be expressed as signs of low degree polynomials.


"