When range_of_stmt is invoked on a statement, out-of-date detection is keyed off the timestamp on the definition. When the def is calculated and its global value stored, a timestamp is created for the cache entry. If range_of_stmt is invoked again, the timestamps of the uses are compared to that of the definition; if they are older, we simply use the cached value. If one of the uses is newer than the definition, an input may have changed, so we recalculate the definition. If the new value is different, we propagate it to any subsequent cache entries.

In the case of a gcond, there is no LHS, so we have no way to determine whether anything might be out of date or need updating. Until now, we did nothing except calculate the branch, so any cache entries in the following blocks were never updated. In this PR, we later determine that b_4 has a value of 0 instead of [0, 1], which would then change the value of c in subsequent blocks.

This patch triggers a re-evaluation of all exports from a block when range_of_stmt is invoked on a gcond. This isn't quite as bad as it seems because:

  a) range_of_stmt on a stmt without a LHS is never invoked from within
     the internal API, so only a client like VRP can make this call.
  b) The cache propagator is already smart enough to only propagate a
     value to the following blocks if
       1 - there is already an on-entry cache value (otherwise it is
           skipped), and
       2 - the value actually changed.

The net result is that this change has very minimal impact on the compile-time performance of the ranger VRP pass, on the order of 0.5%. It also now catches a few things we used to miss.

Bootstrapped on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew