
Tuesday, December 08, 2009

Jack Schwartz, the same Jack Schwartz as in Nelson Dunford and Jacob Schwartz, "Linear Operators" (I studied it in my 3rd year), also contributed to computer science and even designed a programming language, SETL, which influenced Python

Saturday, June 20, 2009

If you are not a Windows user and you are reading Time Series Analysis with Applications in R (by Jonathan D. Cryer, Kung-Sik Chan, @amazon), then
in the R code replace
win.graph by

Friday, May 29, 2009

Tuesday, May 05, 2009

Guys from Northwestern University developed a computational model to predict the spread of swine flu.

At the heart of the simulation are commuter patterns and air traffic. Some data on face-to-face contacts are taken from "Where's George", a web site tracking dollar bills.
Some numbers:
the typical flu has a reproductive number of 1.7 to 2.8;
the number of cases doubles every 2.3 days.
“You could have a student transmitting to 15 others, while the average in Queens is 0.1”

Tuesday, April 28, 2009

R graph gallery

Saturday, February 21, 2009

Octave for Mac OS X
After trying DarwinPorts, compiling it myself, and other options to install Octave for Mac OS X, I finally installed this port
(linked from)
It is pretty consistent with the current Linux version

Monday, January 26, 2009

Suite No. 2, BWV 1067, Badinerie

Friday, December 12, 2008

The California Air Resources Board voted unanimously to adopt the nation’s most comprehensive global warming plan, outlining for the first time how individuals and businesses would meet a landmark 2006 law that made the state a leader on global climate change.
Green Inc.

Wednesday, July 02, 2008

How to install gnuplot on your Apple computer

If you want to install a "traditional" gnuplot and you do not care whether it can be controlled by AppleScript or whether it exports to the QuickTime format, then I'd recommend installing the DarwinPorts gnuplot port

If you do not have DarwinPorts installed,
download and install it here -> DarwinPorts (click on Download)

After you install DarwinPorts,
just add /opt/local/bin (this is the place where DarwinPorts installs binaries) to your PATH
and then run
>sudo port install gnuplot
the installation takes around 5-10 minutes (retrieving and compiling gnuplot and the other packages it depends on)

I tried it and it works on my MacBook Pro

Saturday, June 28, 2008


Today is a Wall-E day for my family. Wall-E gets an impressive 97%

Wednesday, November 28, 2007

Deloitte Technology Fast 500 EMEA
Israeli firms take first, second and third place.

Thursday, October 11, 2007

Miracles of Social Networking

I have found several of my classmates from high school.
I had not seen them for 15 years.

Monday, September 10, 2007

Shana Tova

Sunday, September 02, 2007

The Toyota-Aisin Crisis, Impressive

Read an analysis of the Toyota-Aisin crisis by Duncan Watts (Duncan Watts, Six Degrees: The Science of a Connected Age).
Aisin Seiki, a spin-off of Toyota, produces a class of devices called P-valves, which are used in all Toyota cars to prevent skidding. On account of its spotless performance record, Aisin Seiki had become Toyota's sole provider of P-valves. For reasons of efficiency, Aisin had chosen to locate all P-valve production in a single factory, the Kariya plant No. 1, which was producing 32,500 valves a day in 1997. Because Toyota runs a just-in-time system (for maximum flexibility), Toyota was holding only about two days' worth of P-valves in stock.
On Saturday, February 1, 1997, the Kariya plant burned down. All production lines producing P-valves were destroyed, and it would have taken months to rebuild production capacity. At the time Toyota was rolling out more than 15,000 cars a day. Within two days all production had ceased.
In an astonishingly coordinated response by over 200 firms, with almost no centralized control, production of P-valves was reestablished within three days. Within a week production was fully restored.
How did it happen?
Toyota encourages simultaneous design and engineering; it is an everyday activity for many engineers across Toyota as a group of companies. All the companies (there are more than 20 of them in the Toyota group!) possess a common understanding of how problems should be solved. Many firms which are part of the Toyota group exchange personnel and engineering designs in their everyday activities, despite the fact that some of these firms compete with each other. Toyota engineers are experienced with each other's information resources and lines of communication. They understand and trust each other.
Basically, the recovery work was distributed among all 200 firms; some of them had to change their own production plans completely and reallocate engineers.

See also
How Toyota Handles...

The outcome:
inter-team cooperation, inter-team problem solving, personnel exchange, and a common understanding of approaches and of how problems should be solved can give very impressive results.


Sunday, May 06, 2007

Technorati started to use "authority" to rank blogs, instead of the total number of links to a blog as before.
They provide an authority widget which displays the rank of a blog.
As far as I understand from this post, the authority is the number of links over the last 180 days

Tuesday, May 01, 2007

HP 49G+

I bought one yesterday


  1. Andy Greenberg "Condemned To Google Hell" In Forbes
  2. and response by Matt Cutts (Google)
  3. The Hacker's Guide to Investors by Paul Graham
  4. The Ploy from The Atlantic, The inside story of how the interrogators of Task Force 145 cracked Abu Musab al-Zarqawi’s inner circle—without resorting to torture—and hunted down al-Qaeda’s man in Iraq

Wednesday, August 02, 2006

I have discovered that Luca Trevisan (well known for his results on approximation algorithms and the PCP theorem) is writing a blog devoted to computational complexity and math.
I liked a lot the series of posts about Szemerédi's theorem,
Szemeredi's theorem
Analytical approaches to Szemeredi's Theorem: k=3
Analytical approaches to Szemeredi's Theorem: general case
Polynomials and subspaces
Property testing and Szemeredi's Theorem
his recent post about the real value of the approximation factor for approximation algorithms, and his post about Gowers uniformity

Monday, July 17, 2006

My PhD viva day

Friday, July 14, 2006

I was awarded the PhD degree today.
I'll put some pictures online after Shabbat

Tuesday, June 20, 2006


  • Vijay Vazirani, Approximation Algorithms
    A well-written and comprehensive approach to approximation algorithms from a classical point of view. The first part of the book provides approximation algorithms for many combinatorial NP-hard problems, and it is perhaps the best algorithm-oriented introduction to approximation algorithms. The second part describes connections between approximation and LP (Linear Programming) by means of LP relaxation and the primal-dual schema, and contains LP-based approximation algorithms for many combinatorial problems with proofs of upper bounds. The third part is a selection of topics. Only chapter 28 (counting problems) was interesting for me. Chapters 27 (shortest vector) and 29 (hardness of approximation) are not particularly deep compared to other books

  • Approximation Algorithms for NP-hard Problems, edited by Dorit Hochbaum, is a collection of chapters by different contributors. Perhaps the best source on approximation algorithms. It contains an excellent chapter (Chapter 10) by Sanjeev Arora and Carsten Lund on hardness of approximation, which introduces four main approximation complexity classes for NP-hard problems and contains a small compendium of hard problems for those classes, with proofs of hardness of approximation. The chapter is available online at Arora's website. Recommended to read in conjunction with Hougardy's book chapter (Arora's chapter does not contain proofs of inapproximability for Class IV; see Hougardy's chapter for that). It also contains an excellent chapter on approximation of counting problems by the Monte Carlo method. Chapter 3 was particularly useful in my work, since it is an excellent survey of algorithms and hardness results for the Set Cover and Independent Set problems commonly met in inconsistent database theory. Chapter 9, by Hochbaum, is an introduction to different classes of approximation measured by approximation factor and running time. Basically it focuses on constant-factor approximation (Class I according to Chapter 10) and PTAS problems, and describes efficient algorithms for them. Very good from a practical point of view, since it considers different techniques to get shorter running times and better approximation factors for "practical" problems

  • Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties by Ausiello et al. A textbook which is good for an undergraduate or non-advanced graduate course. The chapter about probabilistic analysis was particularly interesting for me; the others seem to be oversimplified.

  • Probabilistically checkable proofs and their consequences for approximation algorithms by Stefan Hougardy et al. In: Walter A. Deuber, Hans Jürgen Prömel and Bernd Voigt (editors): Trends in Discrete Mathematics, pages 175-223. North Holland.
    The first part of this chapter contains proofs of inapproximability properties for the clique number (a simple constant-factor inapproximability result and a more involved n^\epsilon inapproximability proof, which is a good introduction to hardness proofs for Class IV), the chromatic number (a simple proof by Khanna, Linial, Safra), and MAX-SNP-hard problems (a syntactic class introduced by Papadimitriou and Yannakakis which contains many common combinatorial problems like MAX-SAT, MAX-CUT, and EUCLIDEAN TSP)

  • Lecture notes on approximation algorithms by Prof. Michel Goemans, MIT. A very short introduction to hardness of approximation, then a good introduction to LP-rounding techniques, analysis of algorithms, and approximation algorithms for the max-cut and bin packing problems. Recommended for understanding the analysis of approximation algorithms.

  • Probabilistic Checking of Proofs and Hardness of Approximation Problems by Sanjeev Arora. For hardness of approximation results, see chapter 6. A comprehensive introduction to proving hardness results. I would recommend reading it after the Arora-Lund chapter in Hochbaum's book and Hougardy's chapter.

  • A comprehensive and up-to-date compendium of NP optimization problems by Crescenzi and Kann, which contains approximability and inapproximability results for optimization problems.

Saturday, June 17, 2006

My viva at the University of Manchester is scheduled for July 14th.
We organized (I am a member of the Organizing Committee) the "Second International Workshop on Exchange and Integration of Data", which was attended by 60 participants

Monday, May 01, 2006

I was asked by a student what the best concise reading is on C++ template metaprogramming and compile-time features like enforcements ("constraints").

I would strongly recommend reading Chapter 3 of "Modern C++ Design" by Andrei Alexandrescu.
In this chapter Alexandrescu demonstrates how one can implement Lisp-like lists, and functions to manipulate them, which are available at compile time for automatic code generation.
Basically, he implements a Typelist, a template to store a list of types, and a set of functions on it: computing the length of the list, indexed access, searching, appending, erasing duplicates, partial ordering (according to the subclass hierarchy). All functions are implemented as recursive compile-time routines.
The chapter is well written and is a good introduction to functional-programming primitives in C++ compile-time programming.
Besides that, the chapter gives a good demonstration of how these features can be used.

Another excellent introduction to C++ compile-time features is the "Compile-Time Contracts: Constraints" section of "Imperfect C++: Practical Solutions for Real-Life Programming" by Matthew Wilson.
This section contains several examples of implementing compile-time constraints in C++: for example, constraints such as "class A must be a subclass of class B", "type C must be a POD", "type A and type B must be the same size", et cetera. These features are very useful for generic programming, and other chapters of the same book contain an excellent demo.

Thursday, April 27, 2006

I have migrated my "database repair" code from my own implementation of bucket hash tables to the STL bucket hash tables.
STL bucket hash tables were standardized in TR1 (August 2005)
GNU C++ supports them (TR1 in GNU GCC)
In a few words, C++ TR1 declares a new kind of STL container, the "unordered containers", which are like the "associative containers" but do not support the 6 comparison operators, and which provide a set of operations for examining the contents of individual buckets and for tuning the container.
Bucket hash tables are declared in the unordered_set and unordered_map headers. The new ADTs are in the std::tr1 namespace
For a short introduction into new STL features of TR1, see
Article 1
Hash tables

Wednesday, April 12, 2006

Happy Passover!

Hag Sameah! Happy Pessah to Everybody!
May your cup of happiness overflow :)

Thursday, March 23, 2006

I have not been posting for a while because I was finishing my PhD thesis.
Today I have submitted it to the Student Service Center.
I'll start blogging again next week after EDBT.

Wednesday, March 08, 2006

A very simple, non-optimized C++ implementation of approximation algorithms for the Minimum Weighted Set Cover problem:
MWSCP.solveGreedy() implements Greedy algorithm
MWSCP.solveLayered() implements Layered algorithm

As is well known, solveLayered is optimal with respect to MWSCP instances with bounded frequency of elements: if the frequency of every element is bounded by k, then the layered algorithm produces an approximate solution at most k times more expensive than the optimal solution.
The greedy algorithm may produce a log n-factor solution even for bounded-frequency instances of the MWSC problem.

I have tried these algorithms on many randomly generated instances of the MWSCP.
solveGreedy outperforms solveLayered: it runs around 10 times faster while producing a solution a few times cheaper

Thursday, March 02, 2006

test Mac Widget

NP-Completeness by D. S. Johnson

The NP-Completeness column, started by D. S. Johnson in the Journal of Algorithms in 1982 and discontinued in 1992, has appeared again as a semi-annual NP-completeness column in ACM Trans. Algorithms.
The main purpose of the NP-completeness column was to provide updates and additions to the compendium at the end of the Garey & Johnson book. Besides a list of solved/newly appeared problems, the column contained material about new techniques and developments related to the theory of NP-completeness, like the PCP theorem, ZKP, etc.
The first article of the revived column is devoted to problems solved in the last 13 years.
Four important problems have been solved:

  • COMPOSITE NUMBER is in PTIME: solved by M. Agrawal, N. Kayal, and N. Saxena in 2002,

  • IMPERFECT GRAPH is in PTIME: decide whether a given graph G is not perfect. The problem was solved last year by Chudnovsky, Cornuejols, Liu, Seymour, and Vuskovic

  • EVEN COVER: given a collection C of subsets of a finite set X and k > 0, is there a nonempty subcollection C' of C of size < k such that each element of X is in an even number of sets of C'? The problem was proved to be NP-complete by Alexander Vardy in 1997
  • SHORTEST VECTOR IN A LATTICE: given a collection of n vectors v_1, ..., v_n over the rationals and an integer B > 0, is there a nonzero integer n-vector a such that, for x = \sum a_i v_i, the Euclidean distance from x to 0 is B or less? The decision version of the problem has been known to be NP-complete since 1981 (van Emde Boas). In this article D. Johnson reviews approximation algorithms for the optimization version of the SVP. It has long been known that this problem can be approximated in PTIME, but with an exponential approximation factor (Schnorr 1987). On the hardness side, the SVP cannot be approximated to within some constant factor unless NP \subseteq RP.

At the end of the article Johnson gives a review of three well-known problems which are still open.

Tuesday, February 28, 2006


I have given a short review of algorithms for 5-COLORING of planar graphs. The best known algorithm is linear in the number of vertices.
What about 4-COLORING of planar graphs? We know that each planar graph is 4-colorable; is it easy to find a 4-coloring?
A polynomial, namely O(n^4), algorithm was given by K. Appel and W. Haken in 1989: K. Appel, W. Haken, "Every planar map is four colorable", Contemp. Math. 98, 1-741.
The algorithm is based on their original proof.
Their result was then improved by Robertson, Sanders, Seymour, and Thomas in the STOC '96 paper "Efficiently four-coloring planar graphs"; their algorithm has quadratic complexity.

What is interesting is that the problem of 4-COLORING is NP-hard for 3-colorable graphs (meaning: given a graph G which is known to be 3-COLORABLE, find a 4-COLORING of it).
This result was proved in 1993 by Khanna, Linial, and Safra, "On the hardness of approximating the chromatic number", 2nd Israel Symposium on Theory and Computing Systems (ISTCS). Their proof uses the PCP theorem. Their result immediately generalizes to NP-hardness of (k + 2*floor(k/3) - 1)-coloring of k-colorable graphs.
In 2000, Guruswami and Khanna, in their paper presented at IEEE CCC, gave a nice proof of the NP-hardness of 4-COLORING without use of the PCP theorem. Actually they proved an even stronger result: 4-COLORING is NP-hard for degree-bounded 3-COLORABLE graphs

Another interesting result about coloring (not just 4-coloring): there does not exist a polynomial algorithm which approximates the coloring problem with a factor better than n^\epsilon, for some \epsilon, unless P = NP. A very easy-to-comprehend explanation of the proof (as well as of many other approximation-hardness results) is in an excellent book chapter about the PCP theorem written by Stefan Hougardy

Monday, February 27, 2006

Carnevale di Venezia, yesterday

carna-2006 - 20.JPG
Originally uploaded by alopatenko.


Friday, February 24, 2006

Logic and Databases workshop at Cambridge

Next week
An Isaac Newton Institute Workshop
Logic and Databases

Prof. Bertossi is going to present the results we obtained last summer during his visit to Bolzano.

Thursday, February 23, 2006

The Graph Coloring Problem

The problem: given a planar graph G and a number k, color G (that is, find a map V -> C, where V is the set of vertices of G and C is a set of k colors) such that no two adjacent vertices have the same color.
It is well known that 3-coloring for planar graphs is NP-complete.
Lipton and Miller (Information Processing Letters, 1978, v. 7, 4, "A batching method for coloring planar graphs") proved that 5-coloring is in O(n log n). They improved the result of D. Johnson, an O(n^2) algorithm for 5-coloring (D. Johnson, Worst case behavior of graph coloring algorithms, 1974). Their result was later improved by Chiba, Nishizeki, and Saito (A linear-time algorithm for five-coloring a planar graph, LNCS 108, 1981). Joseph Naor discovered that five-coloring is in NC, i.e., it can be solved in polylogarithmic time on a machine with a polynomial number of processors (to be more exact, in O(log^5 n) time on O(n^3) processors)

The result is presented at J. Naor, A Fast Parallel Coloring Of Planar Graphs With Five Colors, Information Processing Letters 25(1987), 51-53

Here is an outline of the algorithm.

Subroutine MAXINDSET: computes a maximal independent set of a planar graph; known to be in NC (log^2 n time on n^3 processors: Luby, A simple parallel algorithm for the maximal independent set problem, SICOMP 15(4), 1986)

The algorithm is based on recursion:
1. delete a set of independent vertices whose size is at least a fixed fraction A of the graph
2. apply COLORING to the remaining graph
3. extend the obtained coloring to the deleted vertices

Since A is fixed, the number of recursive rounds is logarithmic.
MAXINDSET is in NC, so finding the set of deleted vertices is in NC.
Extending the coloring is in NC.

The result is based on two simple lemmas.
Lemma: More than n/6 of the vertices have degree less than 7.
Proof: Suppose that the number of vertices with degree less than 7 is at most n/6.
In that case the sum of the degrees is at least 7 * (5n/6) + 1 * (n/6) = 6n (assuming no isolated vertices),
but it is known that the sum of the degrees in a planar graph is at most 6n - 12.

Lemma: Let U be the set of vertices whose degree is less than 7. The size of every maximal independent set in G[U] (the subgraph induced by U) is at least |U|/7.
Proof: every vertex of a maximal independent set I in G[U] excludes at most 6 other vertices of U (its neighbors), so I together with its neighborhoods covers all of U, and hence 7|I| >= |U|.

So, the five-coloring algorithm:

1. find a maximal independent set I in the subgraph induced by the vertices of degree less than 7
2. delete I from the graph: G' = G \ I, apply COL = COLORING(G')
3. extend the 5-coloring from G' to G:
for each v in I
a. if the number of neighbours of v is less than 5, then color v; continue
b. let a and b be a pair of colors which is chosen by at least 1/10 of


Wednesday, February 22, 2006

database connection pooling

Just to summarize, and to add a few words to this discussion (a link to the thread at The Design of Software):

Whenever one implements a 3-tier application, database connection pooling should be used, except perhaps for:
  1. light-weight applications
  2. applications with a security policy implemented on the database side: each application user has to log in to the database under his own database account a) so that security restrictions on data access can be applied and b) so that his actions can be audited
  3. applications where the database's security capabilities are used to identify and authorize users
Database connection pooling is supported by most database connectivity APIs.

It is supported by
  1. JDBC, starting from JDBC 2.0 (see the tutorial on JDBC connection pooling by Sun or an Oracle Magazine article), so Java programs can use it
  2. ODBC, starting from ODBC 3.0 (see Microsoft's FAQ about connection pooling for ODBC), so C++, MS Visual Basic, etc. applications can use it
  3. proprietary database connection APIs like the Oracle Call Interface (OCI); see the OCI Programmer's Guide, Ch. 9 (C, C++)

Why pool:
  1. the creation of a database connection takes some time (not short: up to 1 second for Oracle DBMS), so if users frequently connect to and disconnect from the database, then connection pooling must be used. A good example would be an internet shop, where many users connect and disconnect. From my experience, around 10 connections per second is the limit for non-pooled database access.
  2. each database connection takes some memory on both the client and the server side to handle it. Even worse, if you run Oracle in dedicated server mode, then each client gets its own server process. That is wasteful unless you are running long-running queries, a batch job, or some administrative task like using RMAN to restore/back up a database.
  3. so performance certainly degrades if you use one connection per user; see some experiments in JDBC Connection Pooling Best Practices
  4. the database vendor may charge you per connection, or the number of connections may be restricted by some DBA policy.
What to watch out for:
  1. you have to implement proper multi-threading in the 2nd tier; see the examples in the tutorial above for how to do it
  2. some database engines or server configurations may restrict the number of database locks per connection
  3. performance may decrease if there are too many users, so that there are not enough connections in the pool to handle them. Just increase the number of connections; when the number of users decreases, decrease the number of connections. If your application is dynamic (like an internet shop) and the number of users varies significantly over time, then the creation and deletion of connections in the pool may itself degrade performance; you need some techniques based on queueing theory to forecast the number of connections.

Tuesday, February 21, 2006

test data

If you need a large database for experiments or a demonstration,
then I would recommend using the IMDB database.
It contains around 30 tables; some of them contain around 2M entries.
There are keys and foreign keys.
The database is a set of plain text files, but it took only 2 hours of my time to write a Perl script which converts it to an Oracle SQL*Loader dump file.
The only problem is that you cannot use this database for online demonstrations, since the data are copyrighted by IMDB.

structural equivalence of boolean functions and the graph isomorphism problem

One of the important problems in the synthesis and verification of Boolean circuits is:
are two Boolean functions equivalent?

Two Boolean functions of the same arity are equivalent if they map the same inputs to the same values.
It is well known that this problem is hard with respect to the size of the Boolean function representation (non-equivalence of formulae is NP-complete).

Structural equivalence: given the representations of two Boolean functions as propositional formulae \Psi_1 and \Psi_2, does there exist a mapping from the variables of \Psi_1 to the variables of \Psi_2, with transposing of the 0 and 1 states at any lead, such that the formulae become syntactically equivalent?

Or: the two Boolean function representations are on the same orbit of the group generated by permutations and complementations of the variables

Basically it means:
given two circuit descriptions, do they represent the same physical circuit?

The complexity of this problem is not known; it is known that this problem is equivalent to GI, the Graph Isomorphism problem.

The Babai-Luks algorithm for GI runs in time 2^{O(sqrt(n log n))}.
For graphs of genus \leq g, GI can be solved in n^{O(g)} time.
It is known (as a consequence of Bodlaender's theorem) that for graphs of bounded tree-width GI can be solved in polynomial time.

Let Aut(G) denote the automorphism group of G.
The following problems are PTIME equivalent
  1. Graph Isomorphism
  2. Graph Isomorphism Counting (the number of isomorphism from G to H)
  3. Complement Isomorphism: is G isomorphic to its complement
  4. Generating Set for Aut(G): determine a generating set for Aut(G)
  5. context-free grammar isomorphism
  6. conjunctive-query isomorphism
There is evidence that GI is not NP-hard. Boppana, Håstad, and Zachos have shown (1987) that if GI is NP-hard, then the polynomial hierarchy collapses to its second level

Monday, February 20, 2006

minimum storage representation

this post is about the well-known graph-theoretic problem "Minimum Storage Representation":
given a graph G = (V, E), determine a minimal graph G* = (V, E*) such that there is a (directed) path from i to j in G iff there is a (directed) path from i to j in G*.
Basically, we have to find a minimum storage representation of the graph which still allows us to answer reachability queries. Of course, there is a certain trade-off between minimum query time and minimum storage representation, but we do not care about it now.
If we impose the restriction that E* \subseteq E, then the problem is called Minimum Equivalent Graph (MEG) if the graph is undirected, or Minimum Equivalent Digraph (MED) if the graph is directed. MED has the number GT33 in Garey & Johnson.
It is well known that the decision version of the MEG (MED) problem (given a graph G and a number k, does there exist a representation of size \leq k) is NP-complete, by reduction from Directed Hamiltonian Circuit.
What happens if we relax the restriction that E* \subseteq E? In that case the problem is called the transitive reduction of G. For acyclic graphs, MEG = transitive reduction, and for each acyclic graph there exists only one MEG.
Aho, Garey and Ullman (The transitive reduction of a directed graph, SIAM J. Computing, 1 (1972)) have shown that the transitive reduction of any graph can be computed in polynomial time (in O(|V||E|)).
GRAPH ENCODABILITY: given directed graphs (V, A) and (V', A'), is there an encoding of the first graph into the second one, i.e., a one-to-one function V -> V' such that for each edge in A there exists a directed path in A'?
This problem is solvable in PTIME for undirected graphs. For directed graphs, the problem is solvable in PTIME if the first graph (the one which must be encoded) is fixed. In the general case the problem is NP-hard for directed graphs. If the edges are weighted, then even the undirected version of the problem is NP-hard (A. Rosenberg, Data Encodings and Their Costs, Acta Informatica, 1978; referenced by Johnson, The NP-completeness column, J. of Algorithms, 1982)

Sunday, February 19, 2006

how hard is it

One of the branches of algorithmic graph theory is a set of problems of the kind: what is the minimum number of probes of the adjacency matrix of a graph required, in the worst case, in order to determine whether the graph possesses a given property P?
Of course, only nontrivial properties are considered.
The upper limit on the number of probes is the size of the adjacency matrix, n(n-1)/2, since only the upper triangle is necessary.
Bollobás and Eldridge proved that for all nontrivial graph properties the minimum number of probes is \geq 2n - 4, i.e., linear in the number of vertices of the graph.
A good example of a property with linear probe complexity is the existence of a sink node (a node with in-degree n-1 and out-degree 0) in a loop-free directed graph.
Smith-Thomas proved that the existence of a sink can be decided with 3n - log n - 3 probes.

An elusive (evasive) property P is a property such that all essential entries of the adjacency matrix must be probed to establish P.
Among the evasive properties are: being a tree, being connected, being biconnected, being planar, being k-chromatic.

What is the difference between the existence of a sink and being connected?
Rivest and Vuillemin proved that for every nontrivial, monotone property of graphs, the minimum number of probes is \geq n^2/16.
The conjecture that monotone properties are "hard" (evasive) is known as the Aanderaa-Rosenberg conjecture.
Kahn improved the Rivest-Vuillemin result up to n^2/4 for undirected graphs, and King improved it to n^2/2 for directed graphs.

  1. L. Lovasz and N. Young , Lecture Notes on Evasiveness of Graph Properties
  2. Graph Algorithms by J. van Leeuwen in Handbook of Theoretical Computer Science, vol. A

Saturday, February 18, 2006

lovely things

I like beautiful reductions in complexity theory.
The reduction technique looks like a particular kind of programming.
Programming: given a language L (Python, C++, Scheme) and a problem P (processing census information, train scheduling, web information retrieval), you have to produce a program in L which solves P.
Complexity theory: given an (often very simple) "programming language" L (the Clique problem over graphs, algebraic equations over GF[2], Finite State Automaton Non-Equivalence) and a problem P (which usually has a much more involved definition than the language L), you have to produce a reduction R of P to L.

Many beautiful reductions are given in the following articles:

  1. Some Simplified NP-Complete Graph Problems, M. R. Garey, D. S. Johnson and L. Stockmeyer, Theoretical Comp. Sci., 1 (1976), 237-267.
  2. T. J. Schaefer, Complexity of some two-person perfect-information games,
    J. Comput. Syst. Sci., 16 (1978), 185-225.

A few days ago Fortnow reminded us about some very beautiful results on alternating TMs

Friday, February 17, 2006

Scripting for Mobile Phones

Python has been ported to Symbian OS.
See Matt Croydon's resource guide.
There are a few ports of Python to Symbian: Nokia's, PyS60,
In particular, Nokia's port includes
  1. Camera, Calendar, Telephone API
  2. Access to disk space and free memory
  3. GPRS, Bluetooth
  4. SMS
  5. GUI widgets, scalable UI
and many other features

Wrappers for the Google and Amazon sites, as well as many other applications, are available

Thursday, February 09, 2006

Bill Gasarch posted a list of the surprising results in theory:

Which are my 10 favorite ones?

  1. Matrix multiplication in O(n^2.81) time (less than n^3!) (Strassen 1969).
  2. The equivalence problem for Regular Expressions with Squaring is EXPSPACE-complete (Meyer, Stockmeyer FOCS72). A 'natural' problem proven to NOT be in P!
  3. P vs NP cannot be solved by techniques that relativize (Baker, Gill, Solovay 1975).
  4. The Permanent is #P-complete (Valiant 1979).
  5. Shortest Path problem on planar graphs BETTER THAN O(n log n) (Frederickson, 1983, FOCS).
  6. Bit commitment ⇒ SAT in Zero Knowledge (Brassard, Crepeau FOCS86; Goldreich, Micali, Wigderson CRYPTO86, JACM91).
  7. Fixed Parameter Tractability material.
  8. PH ⊆ P^#P (Toda FOCS89, SICOMP91).
  9. Connection between PCP and Approximation.
  10. Grover's quantum O(n^0.5) search algorithm (STOC 1996, Phys. Review Letters 1997).
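Strassen's result (item 1) rests on the observation that two 2x2 matrices can be multiplied with 7 multiplications instead of 8; applied recursively to matrix blocks, this gives the sub-cubic bound. A minimal sketch of the 2x2 case using the standard seven products:

```python
def strassen_2x2(A, B):
    # A, B are 2x2 matrices as nested lists; 7 products suffice
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # recombine the 7 products into the 4 entries of A*B
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Recursing on n/2-sized blocks gives T(n) = 7 T(n/2) + O(n^2), i.e. O(n^log2(7)) ≈ O(n^2.81).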

Friday, October 21, 2005

Date: Thursday, October 20, 5:20-7:30 pm
the first session of

5-hour tutorial
"PCP theorem, Interactive Proofs and Inapproximability of
Optimization Problems"

This talk will be more a tutorial than a seminar. Full proofs, explanations, and background materials will be given. I have given it once already at UoQ, and the tutorial is in good shape in terms of presentation quality.

The PCP theorem is considered one of the major developments in TCS in the 90s. Since 1992, perhaps every STOC and FOCS conference has had at least one article strongly related to the PCP theorem.

Basically, the PCP theorem provides a new characterization of the class NP as the class of languages whose membership problem can be verified using a logarithmic number of random bits and a constant number of proof bits. (Sounds like a miracle? A constant number of proof bits, instead of a polynomial number, is all that is needed to verify membership of a word.)
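In the standard notation, this characterization reads:

```latex
% completeness: x \in L implies some proof is accepted with probability 1
% soundness:    x \notin L implies every proof is rejected with probability \geq 1/2
\mathrm{NP} = \mathrm{PCP}\bigl[O(\log n),\, O(1)\bigr]
```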
The original proof, based on algebraic techniques, is quite complicated (around 20 pages). A new proof based on combinatorial techniques (zig-zag products) was discovered this year by Irit Dinur.
Using other versions of interactive proof systems we can capture the classes PSPACE and NEXPTIME and investigate some of their interesting properties.

NPO means NP optimization problem: the search version of the class NP, which is defined as a class of decision problems. An NPO problem requires finding a minimal (or maximal) solution among the feasible ones. For example, the Traveling Salesman Problem expressed as a search problem (find a minimal-length route in the graph which visits all vertices once) is an NPO problem. An NPO problem, the search version of an NP decision problem, usually belongs to FP^NP or FP^NP[log n], hard classes indeed.
Assuming P is not equal to NP, there is no polynomial algorithm that solves NP-hard problems EXACTLY, so given an NP-hard NPO problem it is natural to ask whether a polynomial-time algorithm exists that solves the problem with some approximation quality.
It was discovered that there exist a few approximation classes of NPO problems, which put certain bounds on the approximation quality achievable by algorithms for those problems. The PCP theorem is useful for proving inapproximability properties of NPO problems.

Besides the PCP theorem, other techniques important for theoretical computer science will be introduced. Expanders are graphs which possess good expansion properties: an (n, d, c) expander is a d-regular undirected graph with n vertices such that for every set of vertices S with |S| <= n/2, the set of vertices adjacent to S has cardinality at least c|S|. Expanders are useful in computational complexity, error-correcting codes, and communication networks.
Using expanders and the PCP theorem we will prove n^\epsilon-inapproximability of the MAX CLIQUE problem and NP-hardness of MAX 3SAT5 (SAT with 3 variables per clause, where each variable is used no more than 5 times)
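For intuition, the expansion constant of a small graph can be computed by brute force over all vertex subsets. A toy sketch (hypothetical helper names; here the adjacent set is taken outside S, one common variant of the definition, and the enumeration is exponential, so this is only for tiny graphs):

```python
from itertools import combinations

def vertex_expansion(n, adj, max_size):
    # c = min over nonempty S with |S| <= max_size of |N(S) \ S| / |S|,
    # where N(S) is the set of vertices adjacent to S
    best = float("inf")
    for k in range(1, max_size + 1):
        for S in combinations(range(n), k):
            s = set(S)
            boundary = {w for v in s for w in adj[v]} - s
            best = min(best, len(boundary) / len(s))
    return best

# 2-regular example: the 8-cycle, vertex i adjacent to i +/- 1 mod 8
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(vertex_expansion(8, adj, 4))  # → 0.5 (worst set: 4 consecutive vertices)
```

Cycles are in fact poor expanders: this minimum tends to 0 as n grows, whereas expander families keep c bounded away from 0.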

Thursday, October 06, 2005



If you are interested, I can give a
4-hour tutorial (2 hours + 2 hours) on "PCP theorem, Interactive Proofs and Inapproximability of Optimization Problems"


Monday, September 19, 2005


I am giving talks on Thursday 22 (4pm), Tuesday 27, and Thursday 29, Physics Department, the University of Queensland

Part A
a) A proof of NP = PCP(log n, 1)
A proof that NP is equal to the set of languages accepted by probabilistic Turing machines with log n random bits and a constant number of queries to an oracle which has access to a proof. The proof is based on arithmetization and low-degree techniques.
b) Low-degree techniques
b1) LFKN tests (Lund, Fortnow, Karloff, Nisan)
The LFKN test method is necessary to prove the PCP theorem. It also played a role in proving other fundamental results: IP = PSPACE and MIP = NEXPTIME.
Assume that f: F^m -> G is a polynomial of degree at most d in every variable, with F being a finite field and G a field extension of F. Suppose H \subseteq F is an arbitrary subset of F.
Using the LFKN method we can check whether \sum_{u \in H^m} f(u) = 0 efficiently (with some limitations on the number of random bits and the number of bits read from a "proof"), such that:
if the sum = 0, then there exists a proof such that the LFKN test accepts with probability 1;
if the sum \not= 0, then the LFKN test rejects with probability at least 3/4 for all proofs.
b2) Low-degree tests
Tests developed by Arora, Lund, Motwani, Sudan, and Szegedy for checking whether a given function is \delta-close to a polynomial of degree at most d. (Proof by Hougardy, Prömel, Steger.)
c) Applications of the PCP theorem
Non-approximability of MAX-SNP-hard problems, coloring and clique problems
Appendix: An optimized version of the PCP theorem

Part B. Quantum PCP theorem (result of Ran Raz presented at FOCS 2005)
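The test of Lund, Fortnow, Karloff and Nisan mentioned in the outline is the sumcheck protocol. A toy round-by-round sketch with an honest prover (all names are hypothetical; degree checks on the prover's messages are omitted, and the sum is taken over H = {0,1}):

```python
import random
from itertools import product

P = 101  # a small prime field modulus (toy choice)

def f(x):
    # example 3-variable polynomial over F_P
    x1, x2, x3 = x
    return (2 * x1 * x2 + 3 * x3 + x1 * x2 * x3) % P

def partial_sum(f, prefix, m):
    # honest prover: sum f over all {0,1} completions of the fixed prefix
    k = m - len(prefix)
    return sum(f(list(prefix) + list(tail))
               for tail in product((0, 1), repeat=k)) % P

def sumcheck(f, m):
    claim = partial_sum(f, [], m)  # claimed value of the sum over {0,1}^m
    r = []
    for _ in range(m):
        # prover's round polynomial, given here as an evaluation oracle
        g = lambda X, fixed=tuple(r): partial_sum(f, list(fixed) + [X], m)
        # verifier checks consistency of the round polynomial with the claim
        assert (g(0) + g(1)) % P == claim
        ri = random.randrange(P)   # random field challenge
        claim = g(ri)
        r.append(ri)
    # final check: a single evaluation of f at the random point
    assert claim == f(r)
    return True
```

With a cheating prover, one of the verifier's checks fails with high probability over the random challenges; that soundness analysis is the content of the LFKN result.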

Thursday, August 04, 2005

Use of PCP theorem

As was proved by Khanna, Linial and Safra in 2000, it is NP-hard to color a 3-colorable graph using 4 colors. That proof used the PCP theorem.
Last year Guruswami and Khanna found an elegant proof of the same result which does not rely on the PCP theorem. See
V. Guruswami, S. Khanna, On the hardness of 4-coloring a 3-colorable graph, SIAM Journal on Discrete Mathematics, 18, 1 (2004)
But the PCP theorem is still needed to prove NP-hardness of 4-coloring 3-colorable graphs of bounded degree.
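For contrast with the hardness results above, deciding k-colorability by brute force is straightforward but exponential (a hypothetical sketch, fine only for tiny graphs):

```python
from itertools import product

def is_k_colorable(n, edges, k):
    # try all k^n colorings; accept if some coloring leaves no edge monochromatic
    return any(all(col[u] != col[v] for u, v in edges)
               for col in product(range(k), repeat=n))

triangle = [(0, 1), (1, 2), (0, 2)]
print(is_k_colorable(3, triangle, 3), is_k_colorable(3, triangle, 2))  # → True False
```

The hardness results say that, unless P = NP, no algorithm does fundamentally better than this for 4-coloring a graph promised to be 3-colorable.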
I have found that a classic book
Flatland: A Romance of Many Dimensions
With Illustrations by the Author, A SQUARE
(Edwin A. Abbott, 1838-1926)

is published online.
As far as I remember, I first read about Flatland in one of Martin Gardner's books.
Another good classic book published online is
"Symmetry Groups and Their Applications
by Willard Miller
Academic Press, New York, 1972"
(out of print, published online by its author). The book is written from an applications point of view (mostly applications to physics), and it contains quite good material on the theory of finite groups.
It cost 374 USD as of August 4, 2005

Thursday, July 07, 2005

Lecture on Tilings given by Prof. Richard Stanley

I have found slides of a public lecture on tilings given by Richard Stanley on July 16, 2004.
Richard Stanley, who is currently the Norman Levinson Professor of Applied Mathematics at M.I.T., was named Clay Senior Scholar at the Park City Mathematics Institute (IAS/PCMI) program on Geometric Combinatorics (July 2004).
Perhaps almost everybody interested in combinatorics has read his "Enumerative Combinatorics", vols. 1 and 2 (at least some chapters of it).
He is now working on a book on hyperplane arrangements. See the draft "An Introduction to Hyperplane Arrangements" (46 pages) or Stanley's lecture notes on hyperplane arrangements (22 pages)

Our article is accepted to DBPL-2005

Today I received a notification that our article has been accepted to DBPL 2005.
DBPL is one of the top symposiums in database theory.
The article is
Leopoldo Bertossi, Loreto Bravo, Enrico Franconi and Andrei Lopatenko, "Complexity and Approximation of Fixing Numerical Attributes in Databases Under Integrity Constraints"
The CORR(arXiv) version is here cs.DB/0503032.
The acceptance rate is around 26% (17 out of 63). Taking into account the DBPL PC, the quality of submissions (I have reviewed some of them), the impact factor of DBPL, and the quality of papers published in previous years, I am quite proud of this result.
At this point I can see directions for extending our research pretty well.

Tuesday, July 05, 2005

Post Lattice and Complexity Theory

UNIBZ KRDB Internal Seminars
Following requests from people who attended the KRDB seminar on Monday, July 4th, I decided to devote the next KRDB seminar entirely to applications of "Post lattice theory" (dichotomy results for SAT and circuit problems, a tractable case of conjunctive query containment, the expressive power of constraints).

Time: Wednesday, July 20th, 2005, 14:00
Speaker: Andrei Lopatenko
Topic: Post Lattice and Complexity Theory
Abstract: An exposition of applications of the theory of Boolean functions to complexity theory. The emphasis is on obtaining dichotomy results.
The following theorems and their proofs will be presented in full detail:
1) [RW00] The complexity of the CIRCUIT-SAT(B) problem (the problem is NP-hard if S_1 \subseteq [B]; in all other cases it is PTIME solvable)
1a) Corollary of 1), Lewis's theorem: SAT(B) is NP-hard if S_1 \subseteq [B]; in all other cases it is in PTIME

2) [RW00] CVP(B) is in NC if B \subseteq V \cup L \cup E; in all other cases it is PTIME-hard

3) [KV00] Schaefer's theorem (the exposition will follow the algebraic approach proposed in KV00): CSP(B) is in PTIME if B is a Schaefer structure; in all other cases CSP(B) is NP-hard

3a) Corollary of 3). [KV00] Testing whether a two-atom conjunctive query is contained in a conjunctive query can be done in PTIME

4) [JCG99] Other characterizations of tractable fragments of SAT

CIRCUIT-SAT is Circuit Satisfiability
CVP is Circuit Value Problem
CSP is Constraint Satisfaction Problem

RW00 Reith, Wagner, The complexity of problems defined by Boolean circuits, TR,
KV00 P. Kolaitis, M. Vardi, Conjunctive query containment and constraint satisfaction, JCSS 2000
JCG99 Jeavons, Cohen, Gyssens, How to determine the expressive power of constraints, Constraints 1999
BCRV42 Böhler, Creignou, Reith, Vollmer, Playing with Boolean Blocks: Post's Lattice with Applications to Complexity Theory, SIGACT News Complexity Theory Column 42
BCRV43 Böhler, Creignou, Reith, Vollmer, Playing with Boolean Blocks: Constraint Satisfaction Problems, SIGACT News Complexity Theory Column 43
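One of the tractable cases in Schaefer's theorem (item 3) is the bijunctive one: every relation of the structure is closed under the coordinatewise ternary majority operation. That closure condition can be checked directly (a toy sketch; the names and example relations are mine, not from the seminar):

```python
from itertools import product

def majority(a, b, c):
    # ternary Boolean majority
    return (a & b) | (a & c) | (b & c)

def closed_under_majority(relation, arity):
    # bijunctive test: coordinatewise majority of any three tuples
    # of the relation must again lie in the relation
    return all(
        tuple(majority(t1[i], t2[i], t3[i]) for i in range(arity)) in relation
        for t1, t2, t3 in product(relation, repeat=3))

R_or2 = {(0, 1), (1, 0), (1, 1)}                       # satisfying tuples of x OR y
R_or3 = set(product((0, 1), repeat=3)) - {(0, 0, 0)}   # satisfying tuples of x OR y OR z
print(closed_under_majority(R_or2, 2), closed_under_majority(R_or3, 3))  # → True False
```

This matches the dichotomy's intuition: 2-clause constraints (2-SAT) are tractable, while the 3-clause relation already escapes this closure condition.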

Monday, July 04, 2005

Today I gave a talk (seminar) on Boolean co-clones, Galois connections between clones and co-clones, and the extension of the theory of Boolean functions to k-valued domains.
I'll put the slides online in a few days.
Most people attending the seminar were interested in applications of the theory of Boolean clones to proving dichotomy results (such as the Reith-Wagner theorem, Schaefer's dichotomy, and the results of Dalmau and Bulatov). I think I'll prepare a two-hour tutorial on dichotomy results only.

Saturday, July 02, 2005


I have updated my home page.
Links to Google Maps images of my hometown and Bolzano have been added.

Monday, June 27, 2005