New patch with the proposed workaround below.

I also slightly changed the _M_merge_multi implementation so that if the
new hash code computation raises an exception the node is simply not
extracted, rather than extracted and then released. This way, if it
happens on the first moved node, the _GLIBCXX_DEBUG mode won't try to
invalidate anything because the source size won't have changed.

OK to commit?

François

On 16/10/21 4:52 pm, Jonathan Wakely wrote:
> On Sat, 16 Oct 2021, 14:49 François Dumont via Libstdc++ wrote:
> > Hi
> >
> > Here is the new proposal. My only concern is that we are also using
> > hash or equal_to functors in the guard destructor.
>
> Can we catch any exception there, invalidate all iterators, and not
> rethrow the exception?
>
> > I am going to enhance the normal merge implementation to make use of
> > the cached hash code when the hash functors are the same between the
> > source and destination of nodes. Maybe I'll be able to make use of
> > it in the Debug implementation too.
> >
> > François
> >
> > On 14/10/21 10:23 am, Jonathan Wakely wrote:
> > On Wed, 13 Oct 2021 at 18:10, François Dumont via Libstdc++ wrote:
> >> Hi
> >>
> >>     libstdc++: [_GLIBCXX_DEBUG] Implement unordered container merge
> >>
> >>     The _GLIBCXX_DEBUG unordered containers need a dedicated merge
> >>     implementation so that any existing iterator on the transferred
> >>     nodes is properly invalidated.
> >>
> >>     Add typedef/using declarations for everything used as-is from
> >>     the normal implementation.
> >>
> >>     libstdc++-v3/ChangeLog:
> >>
> >>         * include/debug/safe_container.h (_Safe_container<>): Make
> >>         all methods protected.
> >>         * include/debug/safe_unordered_container.h
> >>         (_Safe_unordered_container<>::_M_invalidate_all): Make
> >>         public.
> >>         (_Safe_unordered_container<>::_M_invalidate_if): Likewise.
> >>         (_Safe_unordered_container<>::_M_invalidate_local_if):
> >>         Likewise.
> >>         * include/debug/unordered_map
> >>         (unordered_map<>::mapped_type, pointer, const_pointer): New
> >>         typedef.
> >>         (unordered_map<>::reference, const_reference,
> >>         difference_type): New typedef.
> >>         (unordered_map<>::get_allocator, empty, size, max_size):
> >>         Add usings.
> >>         (unordered_map<>::bucket_count, max_bucket_count, bucket):
> >>         Add usings.
> >>         (unordered_map<>::hash_function, key_equal, count,
> >>         contains): Add usings.
> >>         (unordered_map<>::operator[], at, rehash, reserve): Add
> >>         usings.
> >>         (unordered_map<>::merge): New.
> >>         (unordered_multimap<>::mapped_type, pointer,
> >>         const_pointer): New typedef.
> >>         (unordered_multimap<>::reference, const_reference,
> >>         difference_type): New typedef.
> >>         (unordered_multimap<>::get_allocator, empty, size,
> >>         max_size): Add usings.
> >>         (unordered_multimap<>::bucket_count, max_bucket_count,
> >>         bucket): Add usings.
> >>         (unordered_multimap<>::hash_function, key_equal, count,
> >>         contains): Add usings.
> >>         (unordered_multimap<>::rehash, reserve): Add usings.
> >>         (unordered_multimap<>::merge): New.
> >>         * include/debug/unordered_set
> >>         (unordered_set<>::mapped_type, pointer, const_pointer): New
> >>         typedef.
> >>         (unordered_set<>::reference, const_reference,
> >>         difference_type): New typedef.
> >>         (unordered_set<>::get_allocator, empty, size, max_size):
> >>         Add usings.
> >>         (unordered_set<>::bucket_count, max_bucket_count, bucket):
> >>         Add usings.
> >>         (unordered_set<>::hash_function, key_equal, count,
> >>         contains): Add usings.
> >>         (unordered_set<>::rehash, reserve): Add usings.
> >>         (unordered_set<>::merge): New.
> >>         (unordered_multiset<>::mapped_type, pointer,
> >>         const_pointer): New typedef.
> >>         (unordered_multiset<>::reference, const_reference,
> >>         difference_type): New typedef.
> >>         (unordered_multiset<>::get_allocator, empty, size,
> >>         max_size): Add usings.
> >>         (unordered_multiset<>::bucket_count, max_bucket_count,
> >>         bucket): Add usings.
> >>         (unordered_multiset<>::hash_function, key_equal, count,
> >>         contains): Add usings.
> >>         (unordered_multiset<>::rehash, reserve): Add usings.
> >>         (unordered_multiset<>::merge): New.
> >>         * testsuite/23_containers/unordered_map/debug/merge1_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_map/debug/merge2_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_map/debug/merge3_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_map/debug/merge4_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_multimap/debug/merge1_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_multimap/debug/merge2_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_multimap/debug/merge3_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_multimap/debug/merge4_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_multiset/debug/merge1_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_multiset/debug/merge2_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_multiset/debug/merge3_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_multiset/debug/merge4_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_set/debug/merge1_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_set/debug/merge2_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_set/debug/merge3_neg.cc:
> >>         New test.
> >>         * testsuite/23_containers/unordered_set/debug/merge4_neg.cc:
> >>         New test.
> >>         * testsuite/util/testsuite_abi.h: [_GLIBCXX_DEBUG] Use
> >>         normal unordered container implementation.
> >>
> >> Tested under Linux x86_64.
> >>
> >> OK to commit?
>
> > Yes, thanks. But ...
> >
> > This looks like an improvement over what we have now, but not 100%
> > correct. The merge functions can exit via an exception (if any hash
> > function or equality predicate throws), and if that happens the safe
> > iterators will not get invalidated. I think we need to call
> > _Base::merge in a try-block, and do the iterator invalidation
> > whether we return normally or via an exception.
> >
> > Something like:
> >
> >    template<typename _H2, typename _P2>
> >      void
> >      merge(unordered_set<_Value, _H2, _P2, _Alloc>& __source)
> >      {
> >        struct _Guard
> >        {
> >          _Guard(unordered_set& __source) noexcept
> >          : _M_source(__source), _M_size(__source.size())
> >          { }
> >
> >          ~_Guard()
> >          {
> >            const size_type __size = _M_source.size();
> >            if (__size != _M_size)
> >              {
> >                if (__size == 0)
> >                  _M_source._M_invalidate_all();
> >                else
> >                  {
> >                    auto __pred = [this](auto __it)
> >                                  { return _M_source.count(*__it) == 0; };
> >                    _M_source._M_invalidate_if(__pred);
> >                    _M_source._M_invalidate_local_if(__pred);
> >                  }
> >              }
> >          }
> >
> >          _Guard(const _Guard&) = delete;
> >
> >          unordered_set& _M_source;
> >          const size_type _M_size;
> >        };
> >        _Guard __guard(__source);
> >        _Base::merge(__source._M_base());
> >      }
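The _Guard above is a scope guard: its destructor runs on every exit
path from merge, including stack unwinding after a throwing hash or
equality functor, so iterators to moved nodes are invalidated whether
merge succeeds or throws. A minimal standalone sketch of the same
pattern, using a plain std::unordered_set and a hypothetical "tracked
keys" set standing in for the debug-mode iterator bookkeeping (the
names merge_guard and guarded_merge are illustrative, not libstdc++
API):

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_set>

// Scope guard mirroring the proposed _Guard: remembers the source's
// size, and on destruction "invalidates" (here: erases from tracked)
// every key the source no longer owns.
struct merge_guard
{
    std::unordered_set<int>& source;
    std::unordered_set<int>& tracked;     // keys with live "iterators"
    const std::size_t initial_size;

    merge_guard(std::unordered_set<int>& src,
                std::unordered_set<int>& trk) noexcept
    : source(src), tracked(trk), initial_size(src.size())
    { }

    merge_guard(const merge_guard&) = delete;

    // Runs on both normal return and exceptional exit.
    ~merge_guard()
    {
        if (source.size() != initial_size)
        {
            if (source.empty())
                tracked.clear();          // analogue of _M_invalidate_all
            else
                for (auto it = tracked.begin(); it != tracked.end(); )
                    if (source.count(*it))
                        ++it;             // node still in source: keep
                    else
                        it = tracked.erase(it);  // node moved: invalidate
        }
    }
};

// Merge src into dst under the guard; the guard fires even if merge throws.
void guarded_merge(std::unordered_set<int>& dst,
                   std::unordered_set<int>& src,
                   std::unordered_set<int>& tracked)
{
    merge_guard guard(src, tracked);
    dst.merge(src);                       // C++17 node-handle merge
}
```

With dst = {1, 2} and src = {2, 3, 4}, merge moves 3 and 4 (2 stays in
src as a duplicate), and the guard then drops 3 and 4 from the tracked
set, leaving only 2 — the one key whose "iterator" is still valid.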