==== golang-github-orcaman-concurrent-map-2.0.1/.gitignore ====
.idea/

==== golang-github-orcaman-concurrent-map-2.0.1/.travis.yml ====
# This is a weird way of telling Travis to use the fast container-based test
# runner instead of the slow VM-based runner.
sudo: false
language: go

# You don't need to test on very old versions of the Go compiler. It's the user's
# responsibility to keep their compilers up to date.
go:
  - 1.18

# Only clone the most recent commit.
git:
  depth: 1

# Skip the install step. Don't `go get` dependencies. Only build with the code
# in vendor/.
install: true

# Don't email me the results of the test runs.
notifications:
  email: false

before_script:
  - go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

# The script section always runs to completion (set +e). If we have linter
# issues AND a failing test, we want to see both. Configure golangci-lint with
# a .golangci.yml file at the top level of your repo.
script:
  - golangci-lint run       # run a bunch of code checkers/linters in parallel
  - go test -v -race ./...  # run all the tests with the race detector enabled

==== golang-github-orcaman-concurrent-map-2.0.1/LICENSE ====
The MIT License (MIT)

Copyright (c) 2014 streamrail

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
==== golang-github-orcaman-concurrent-map-2.0.1/README-zh.md ====
# concurrent map [![Build Status](https://travis-ci.com/orcaman/concurrent-map.svg?branch=master)](https://travis-ci.com/orcaman/concurrent-map)

(Translated from Chinese.) As explained [here](http://golang.org/doc/faq#atomic_maps) and [here](http://blog.golang.org/go-maps-in-action), Go's built-in `map` type does not support concurrent reads and writes. `concurrent-map` provides a high-performance solution: it shards the internal `map`, reducing lock granularity so that time spent waiting on locks (lock contention) is minimized.

Prior to Go 1.9, the standard library had no concurrent `map` implementation. Go 1.9 introduced `sync.Map`, which differs from this `concurrent-map` in a few key ways. The stdlib `sync.Map` is designed for append-only scenarios, so if you want to use the map as something closer to an in-memory database, you may benefit from our version. You can read more in the golang repo, [here](https://github.com/golang/go/issues/21035) and [here](https://stackoverflow.com/questions/11063473/map-with-concurrent-access).

***Translator's note: `sync.Map` performs well under read-heavy, write-light workloads; otherwise its concurrent performance is poor.***

## Usage

Import the package:

```go
import (
	"github.com/orcaman/concurrent-map/v2"
)
```

```bash
go get "github.com/orcaman/concurrent-map/v2"
```

The package is now imported under the `cmap` namespace.

***Translator's note: a package's qualified prefix (namespace) usually matches its directory name, but this package is a classic exception 😂 — they differ! Keep this in mind when using it.***

## Example

```go
// Create a new map.
m := cmap.New[string]()

// Set the value "bar" under the key "foo" in m.
m.Set("foo", "bar")

// Retrieve the value for the given key from m.
bar, ok := m.Get("foo")

// Remove the item under the key "foo".
m.Remove("foo")
```

For more examples, see `concurrent_map_test.go`.
Run the tests:

```bash
go test "github.com/orcaman/concurrent-map/v2"
```

## Contributing

Contributions are very welcome. In order for a contribution to be merged, please follow these guidelines:

- Open an issue and describe what you are after (fixing a bug, adding an enhancement, etc.).
- Based on the core team's feedback on that issue, submit a pull request describing the changes and linking to the issue.
- New code must have test coverage.
- If the code is about performance, benchmarks must be included in the process (either in the issue or in the PR).
- In general, we would like to keep `concurrent-map` as simple as possible and as similar to the native `map` as possible. Please keep this in mind when opening issues.

## License

MIT (see [LICENSE](https://github.com/orcaman/concurrent-map/blob/master/LICENSE) file)

==== golang-github-orcaman-concurrent-map-2.0.1/README.md ====
# concurrent map [![Build Status](https://travis-ci.com/orcaman/concurrent-map.svg?branch=master)](https://travis-ci.com/orcaman/concurrent-map)

As explained [here](http://golang.org/doc/faq#atomic_maps) and [here](http://blog.golang.org/go-maps-in-action), the `map` type in Go doesn't support concurrent reads and writes. `concurrent-map` provides a high-performance solution to this by sharding the map, with minimal time spent waiting for locks.

Prior to Go 1.9, there was no concurrent map implementation in the stdlib. In Go 1.9, `sync.Map` was introduced. The new `sync.Map` has a few key differences from this map. The stdlib `sync.Map` is designed for append-only scenarios, so if you want to use the map for something more like an in-memory db, you might benefit from using our version. You can read more about it in the golang repo, for example [here](https://github.com/golang/go/issues/21035) and [here](https://stackoverflow.com/questions/11063473/map-with-concurrent-access).

## usage

Import the package:

```go
import (
	"github.com/orcaman/concurrent-map/v2"
)
```

```bash
go get "github.com/orcaman/concurrent-map/v2"
```

The package is now imported under the "cmap" namespace.

## example

```go
// Create a new map.
m := cmap.New[string]()

// Set the item "bar" under the key "foo".
m.Set("foo", "bar")

// Retrieve item from map.
bar, ok := m.Get("foo")

// Remove the item under the key "foo".
m.Remove("foo")
```

For more examples, have a look at `concurrent_map_test.go`.

Running tests:

```bash
go test "github.com/orcaman/concurrent-map/v2"
```

## guidelines for contributing

Contributions are highly welcome. In order for a contribution to be merged, please follow these guidelines:

- Open an issue and describe what you are after (fixing a bug, adding an enhancement, etc.).
- Based on the core team's feedback on the above-mentioned issue, submit a pull request describing the changes and linking to the issue.
- New code must have test coverage.
- If the code is about performance, you must include benchmarks in the process (either in the issue or in the PR).
- In general, we would like to keep `concurrent-map` as simple as possible and as similar to the native `map` as possible. Please keep this in mind when opening issues.

## language

- [中文说明](./README-zh.md)

## license

MIT (see [LICENSE](https://github.com/orcaman/concurrent-map/blob/master/LICENSE) file)

==== golang-github-orcaman-concurrent-map-2.0.1/concurrent_map.go ====
package cmap

import (
	"encoding/json"
	"fmt"
	"sync"
)

var SHARD_COUNT = 32

type Stringer interface {
	fmt.Stringer
	comparable
}

// A "thread" safe map of type string:Anything.
// To avoid lock bottlenecks this map is divided into several (SHARD_COUNT) map shards.
type ConcurrentMap[K comparable, V any] struct {
	shards   []*ConcurrentMapShared[K, V]
	sharding func(key K) uint32
}

// A "thread" safe string to anything map.
type ConcurrentMapShared[K comparable, V any] struct {
	items        map[K]V
	sync.RWMutex // Read Write mutex, guards access to internal map.
}

func create[K comparable, V any](sharding func(key K) uint32) ConcurrentMap[K, V] {
	m := ConcurrentMap[K, V]{
		sharding: sharding,
		shards:   make([]*ConcurrentMapShared[K, V], SHARD_COUNT),
	}
	for i := 0; i < SHARD_COUNT; i++ {
		m.shards[i] = &ConcurrentMapShared[K, V]{items: make(map[K]V)}
	}
	return m
}

// New creates a new concurrent map with string keys.
func New[V any]() ConcurrentMap[string, V] {
	return create[string, V](fnv32)
}

// NewStringer creates a new concurrent map keyed by a Stringer type.
func NewStringer[K Stringer, V any]() ConcurrentMap[K, V] {
	return create[K, V](strfnv32[K])
}

// NewWithCustomShardingFunction creates a new concurrent map with a custom sharding function.
func NewWithCustomShardingFunction[K comparable, V any](sharding func(key K) uint32) ConcurrentMap[K, V] {
	return create[K, V](sharding)
}

// GetShard returns the shard under the given key.
func (m ConcurrentMap[K, V]) GetShard(key K) *ConcurrentMapShared[K, V] {
	return m.shards[uint(m.sharding(key))%uint(SHARD_COUNT)]
}

// MSet sets all key/value pairs from the given map.
func (m ConcurrentMap[K, V]) MSet(data map[K]V) {
	for key, value := range data {
		shard := m.GetShard(key)
		shard.Lock()
		shard.items[key] = value
		shard.Unlock()
	}
}

// Set sets the given value under the specified key.
func (m ConcurrentMap[K, V]) Set(key K, value V) {
	// Get map shard.
	shard := m.GetShard(key)
	shard.Lock()
	shard.items[key] = value
	shard.Unlock()
}

// UpsertCb is a callback that returns the new element to be inserted into the map.
// It is called while the lock is held, therefore it MUST NOT
// try to access other keys in the same map, as that can lead to deadlock since
// Go's sync.RWMutex is not reentrant.
type UpsertCb[V any] func(exist bool, valueInMap V, newValue V) V

// Upsert (insert or update) updates an existing element or inserts a new one using UpsertCb.
func (m ConcurrentMap[K, V]) Upsert(key K, value V, cb UpsertCb[V]) (res V) {
	shard := m.GetShard(key)
	shard.Lock()
	v, ok := shard.items[key]
	res = cb(ok, v, value)
	shard.items[key] = res
	shard.Unlock()
	return res
}

// SetIfAbsent sets the given value under the specified key if no value was associated with it.
func (m ConcurrentMap[K, V]) SetIfAbsent(key K, value V) bool {
	// Get map shard.
	shard := m.GetShard(key)
	shard.Lock()
	_, ok := shard.items[key]
	if !ok {
		shard.items[key] = value
	}
	shard.Unlock()
	return !ok
}

// Get retrieves an element from the map under the given key.
func (m ConcurrentMap[K, V]) Get(key K) (V, bool) {
	// Get shard.
	shard := m.GetShard(key)
	shard.RLock()
	// Get item from shard.
	val, ok := shard.items[key]
	shard.RUnlock()
	return val, ok
}

// Count returns the number of elements within the map.
func (m ConcurrentMap[K, V]) Count() int {
	count := 0
	for i := 0; i < SHARD_COUNT; i++ {
		shard := m.shards[i]
		shard.RLock()
		count += len(shard.items)
		shard.RUnlock()
	}
	return count
}

// Has looks up an item under the specified key.
func (m ConcurrentMap[K, V]) Has(key K) bool {
	// Get shard.
	shard := m.GetShard(key)
	shard.RLock()
	// See if element is within shard.
	_, ok := shard.items[key]
	shard.RUnlock()
	return ok
}

// Remove removes an element from the map.
func (m ConcurrentMap[K, V]) Remove(key K) {
	// Try to get shard.
	shard := m.GetShard(key)
	shard.Lock()
	delete(shard.items, key)
	shard.Unlock()
}

// RemoveCb is a callback executed in a map.RemoveCb() call while the lock is held.
// If it returns true, the element will be removed from the map.
type RemoveCb[K any, V any] func(key K, v V, exists bool) bool

// RemoveCb locks the shard containing the key, retrieves its current value and calls the callback with those params.
// If the callback returns true and the element exists, it will remove it from the map.
// Returns the value returned by the callback (even if the element was not present in the map).
func (m ConcurrentMap[K, V]) RemoveCb(key K, cb RemoveCb[K, V]) bool {
	// Try to get shard.
	shard := m.GetShard(key)
	shard.Lock()
	v, ok := shard.items[key]
	remove := cb(key, v, ok)
	if remove && ok {
		delete(shard.items, key)
	}
	shard.Unlock()
	return remove
}

// Pop removes an element from the map and returns it.
func (m ConcurrentMap[K, V]) Pop(key K) (v V, exists bool) {
	// Try to get shard.
	shard := m.GetShard(key)
	shard.Lock()
	v, exists = shard.items[key]
	delete(shard.items, key)
	shard.Unlock()
	return v, exists
}

// IsEmpty checks if the map is empty.
func (m ConcurrentMap[K, V]) IsEmpty() bool {
	return m.Count() == 0
}

// Tuple is used by the Iter & IterBuffered functions to wrap two variables together over a channel.
type Tuple[K comparable, V any] struct {
	Key K
	Val V
}

// Iter returns an iterator which could be used in a for range loop.
//
// Deprecated: using IterBuffered() will get better performance.
func (m ConcurrentMap[K, V]) Iter() <-chan Tuple[K, V] {
	chans := snapshot(m)
	ch := make(chan Tuple[K, V])
	go fanIn(chans, ch)
	return ch
}

// IterBuffered returns a buffered iterator which could be used in a for range loop.
func (m ConcurrentMap[K, V]) IterBuffered() <-chan Tuple[K, V] {
	chans := snapshot(m)
	total := 0
	for _, c := range chans {
		total += cap(c)
	}
	ch := make(chan Tuple[K, V], total)
	go fanIn(chans, ch)
	return ch
}

// Clear removes all items from the map.
func (m ConcurrentMap[K, V]) Clear() {
	for item := range m.IterBuffered() {
		m.Remove(item.Key)
	}
}

// snapshot returns an array of channels that contain the elements in each shard,
// which likely takes a snapshot of `m`.
// It returns once the size of each buffered channel is determined,
// before all the channels are populated using goroutines.
func snapshot[K comparable, V any](m ConcurrentMap[K, V]) (chans []chan Tuple[K, V]) {
	// Panics when you access map items before initializing.
	if len(m.shards) == 0 {
		panic(`cmap.ConcurrentMap is not initialized. Should run New() before usage.`)
	}
	chans = make([]chan Tuple[K, V], SHARD_COUNT)
	wg := sync.WaitGroup{}
	wg.Add(SHARD_COUNT)
	// For each shard:
	for index, shard := range m.shards {
		go func(index int, shard *ConcurrentMapShared[K, V]) {
			// For each key/value pair:
			shard.RLock()
			chans[index] = make(chan Tuple[K, V], len(shard.items))
			wg.Done()
			for key, val := range shard.items {
				chans[index] <- Tuple[K, V]{key, val}
			}
			shard.RUnlock()
			close(chans[index])
		}(index, shard)
	}
	wg.Wait()
	return chans
}

// fanIn reads elements from channels `chans` into channel `out`.
func fanIn[K comparable, V any](chans []chan Tuple[K, V], out chan Tuple[K, V]) {
	wg := sync.WaitGroup{}
	wg.Add(len(chans))
	for _, ch := range chans {
		go func(ch chan Tuple[K, V]) {
			for t := range ch {
				out <- t
			}
			wg.Done()
		}(ch)
	}
	wg.Wait()
	close(out)
}

// Items returns all items as map[K]V.
func (m ConcurrentMap[K, V]) Items() map[K]V {
	tmp := make(map[K]V)

	// Insert items into the temporary map.
	for item := range m.IterBuffered() {
		tmp[item.Key] = item.Val
	}

	return tmp
}

// IterCb is an iterator callback, called for every key/value pair found in
// the map. The RLock is held for all calls on a given shard,
// therefore the callback sees a consistent view of a shard,
// but not across shards.
type IterCb[K comparable, V any] func(key K, v V)

// IterCb is a callback-based iterator; it is the cheapest way to read
// all elements in the map.
func (m ConcurrentMap[K, V]) IterCb(fn IterCb[K, V]) {
	for idx := range m.shards {
		shard := (m.shards)[idx]
		shard.RLock()
		for key, value := range shard.items {
			fn(key, value)
		}
		shard.RUnlock()
	}
}

// Keys returns all keys as []K.
func (m ConcurrentMap[K, V]) Keys() []K {
	count := m.Count()
	ch := make(chan K, count)
	go func() {
		// For each shard:
		wg := sync.WaitGroup{}
		wg.Add(SHARD_COUNT)
		for _, shard := range m.shards {
			go func(shard *ConcurrentMapShared[K, V]) {
				// For each key:
				shard.RLock()
				for key := range shard.items {
					ch <- key
				}
				shard.RUnlock()
				wg.Done()
			}(shard)
		}
		wg.Wait()
		close(ch)
	}()

	// Generate keys.
	keys := make([]K, 0, count)
	for k := range ch {
		keys = append(keys, k)
	}
	return keys
}

// MarshalJSON reveals ConcurrentMap "private" variables to json marshal.
func (m ConcurrentMap[K, V]) MarshalJSON() ([]byte, error) {
	// Create a temporary map, which will hold all items spread across shards.
	tmp := make(map[K]V)

	// Insert items into the temporary map.
	for item := range m.IterBuffered() {
		tmp[item.Key] = item.Val
	}
	return json.Marshal(tmp)
}

func strfnv32[K fmt.Stringer](key K) uint32 {
	return fnv32(key.String())
}

func fnv32(key string) uint32 {
	hash := uint32(2166136261)
	const prime32 = uint32(16777619)
	keyLength := len(key)
	for i := 0; i < keyLength; i++ {
		hash *= prime32
		hash ^= uint32(key[i])
	}
	return hash
}

// UnmarshalJSON is the reverse of MarshalJSON.
func (m *ConcurrentMap[K, V]) UnmarshalJSON(b []byte) (err error) {
	tmp := make(map[K]V)

	// Unmarshal into a single map.
	if err := json.Unmarshal(b, &tmp); err != nil {
		return err
	}

	// For each key/value pair in the temporary map, insert into our concurrent map.
	for key, val := range tmp {
		m.Set(key, val)
	}
	return nil
}

==== golang-github-orcaman-concurrent-map-2.0.1/concurrent_map_bench_test.go ====
package cmap

import (
	"strconv"
	"sync"
	"testing"
)

type Integer int

func (i Integer) String() string {
	return strconv.Itoa(int(i))
}

func BenchmarkItems(b *testing.B) {
	m := New[Animal]()

	// Insert 10000 elements.
	for i := 0; i < 10000; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}
	for i := 0; i < b.N; i++ {
		m.Items()
	}
}

func BenchmarkItemsInteger(b *testing.B) {
	m := NewStringer[Integer, Animal]()

	// Insert 10000 elements.
	for i := 0; i < 10000; i++ {
		m.Set((Integer)(i), Animal{strconv.Itoa(i)})
	}
	for i := 0; i < b.N; i++ {
		m.Items()
	}
}

func directSharding(key uint32) uint32 {
	return key
}

func BenchmarkItemsInt(b *testing.B) {
	m := NewWithCustomShardingFunction[uint32, Animal](directSharding)

	// Insert 10000 elements.
	for i := 0; i < 10000; i++ {
		m.Set((uint32)(i), Animal{strconv.Itoa(i)})
	}
	for i := 0; i < b.N; i++ {
		m.Items()
	}
}

func BenchmarkMarshalJson(b *testing.B) {
	m := New[Animal]()

	// Insert 10000 elements.
	for i := 0; i < 10000; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	for i := 0; i < b.N; i++ {
		_, err := m.MarshalJSON()
		if err != nil {
			b.FailNow()
		}
	}
}

func BenchmarkStrconv(b *testing.B) {
	for i := 0; i < b.N; i++ {
		strconv.Itoa(i)
	}
}

func BenchmarkSingleInsertAbsent(b *testing.B) {
	m := New[string]()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		m.Set(strconv.Itoa(i), "value")
	}
}

func BenchmarkSingleInsertAbsentSyncMap(b *testing.B) {
	var m sync.Map
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		m.Store(strconv.Itoa(i), "value")
	}
}

func BenchmarkSingleInsertPresent(b *testing.B) {
	m := New[string]()
	m.Set("key", "value")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		m.Set("key", "value")
	}
}

func BenchmarkSingleInsertPresentSyncMap(b *testing.B) {
	var m sync.Map
	m.Store("key", "value")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		m.Store("key", "value")
	}
}

func benchmarkMultiInsertDifferent(b *testing.B) {
	m := New[string]()
	finished := make(chan struct{}, b.N)
	_, set := GetSet(m, finished)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go set(strconv.Itoa(i), "value")
	}
	for i := 0; i < b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiInsertDifferentSyncMap(b *testing.B) {
	var m sync.Map
	finished := make(chan struct{}, b.N)
	_, set := GetSetSyncMap[string, string](&m, finished)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go set(strconv.Itoa(i), "value")
	}
	for i := 0; i < b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiInsertDifferent_1_Shard(b *testing.B) {
	runWithShards(benchmarkMultiInsertDifferent, b, 1)
}
func BenchmarkMultiInsertDifferent_16_Shard(b *testing.B) {
	runWithShards(benchmarkMultiInsertDifferent, b, 16)
}
func BenchmarkMultiInsertDifferent_32_Shard(b *testing.B) {
	runWithShards(benchmarkMultiInsertDifferent, b, 32)
}
func BenchmarkMultiInsertDifferent_256_Shard(b *testing.B) {
	runWithShards(benchmarkMultiInsertDifferent, b, 256)
}

func BenchmarkMultiInsertSame(b *testing.B) {
	m := New[string]()
	finished := make(chan struct{}, b.N)
	_, set := GetSet(m, finished)
	m.Set("key", "value")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go set("key", "value")
	}
	for i := 0; i < b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiInsertSameSyncMap(b *testing.B) {
	var m sync.Map
	finished := make(chan struct{}, b.N)
	_, set := GetSetSyncMap[string, string](&m, finished)
	m.Store("key", "value")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go set("key", "value")
	}
	for i := 0; i < b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiGetSame(b *testing.B) {
	m := New[string]()
	finished := make(chan struct{}, b.N)
	get, _ := GetSet(m, finished)
	m.Set("key", "value")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go get("key", "value")
	}
	for i := 0; i < b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiGetSameSyncMap(b *testing.B) {
	var m sync.Map
	finished := make(chan struct{}, b.N)
	get, _ := GetSetSyncMap[string, string](&m, finished)
	m.Store("key", "value")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go get("key", "value")
	}
	for i := 0; i < b.N; i++ {
		<-finished
	}
}

func benchmarkMultiGetSetDifferent(b *testing.B) {
	m := New[string]()
	finished := make(chan struct{}, 2*b.N)
	get, set := GetSet(m, finished)
	m.Set("-1", "value")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go set(strconv.Itoa(i-1), "value")
		go get(strconv.Itoa(i), "value")
	}
	for i := 0; i < 2*b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiGetSetDifferentSyncMap(b *testing.B) {
	var m sync.Map
	finished := make(chan struct{}, 2*b.N)
	get, set := GetSetSyncMap[string, string](&m, finished)
	m.Store("-1", "value")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go set(strconv.Itoa(i-1), "value")
		go get(strconv.Itoa(i), "value")
	}
	for i := 0; i < 2*b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiGetSetDifferent_1_Shard(b *testing.B) {
	runWithShards(benchmarkMultiGetSetDifferent, b, 1)
}
func BenchmarkMultiGetSetDifferent_16_Shard(b *testing.B) {
	runWithShards(benchmarkMultiGetSetDifferent, b, 16)
}
func BenchmarkMultiGetSetDifferent_32_Shard(b *testing.B) {
	runWithShards(benchmarkMultiGetSetDifferent, b, 32)
}
func BenchmarkMultiGetSetDifferent_256_Shard(b *testing.B) {
	runWithShards(benchmarkMultiGetSetDifferent, b, 256)
}

func benchmarkMultiGetSetBlock(b *testing.B) {
	m := New[string]()
	finished := make(chan struct{}, 2*b.N)
	get, set := GetSet(m, finished)
	for i := 0; i < b.N; i++ {
		m.Set(strconv.Itoa(i%100), "value")
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go set(strconv.Itoa(i%100), "value")
		go get(strconv.Itoa(i%100), "value")
	}
	for i := 0; i < 2*b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiGetSetBlockSyncMap(b *testing.B) {
	var m sync.Map
	finished := make(chan struct{}, 2*b.N)
	get, set := GetSetSyncMap[string, string](&m, finished)
	for i := 0; i < b.N; i++ {
		m.Store(strconv.Itoa(i%100), "value")
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		go set(strconv.Itoa(i%100), "value")
		go get(strconv.Itoa(i%100), "value")
	}
	for i := 0; i < 2*b.N; i++ {
		<-finished
	}
}

func BenchmarkMultiGetSetBlock_1_Shard(b *testing.B) {
	runWithShards(benchmarkMultiGetSetBlock, b, 1)
}
func BenchmarkMultiGetSetBlock_16_Shard(b *testing.B) {
	runWithShards(benchmarkMultiGetSetBlock, b, 16)
}
func BenchmarkMultiGetSetBlock_32_Shard(b *testing.B) {
	runWithShards(benchmarkMultiGetSetBlock, b, 32)
}
func BenchmarkMultiGetSetBlock_256_Shard(b *testing.B) {
	runWithShards(benchmarkMultiGetSetBlock, b, 256)
}

// GetSet returns a get closure and a set closure over the given map; the result
// order (get first, then set) matches the order in which the closures are built.
func GetSet[K comparable, V any](m ConcurrentMap[K, V], finished chan struct{}) (get func(key K, value V), set func(key K, value V)) {
	return func(key K, value V) {
			for i := 0; i < 10; i++ {
				m.Get(key)
			}
			finished <- struct{}{}
		}, func(key K, value V) {
			for i := 0; i < 10; i++ {
				m.Set(key, value)
			}
			finished <- struct{}{}
		}
}

func GetSetSyncMap[K comparable, V any](m *sync.Map, finished chan struct{}) (get func(key K, value V), set func(key K, value V)) {
	get = func(key K, value V) {
		for i := 0; i < 10; i++ {
			m.Load(key)
		}
		finished <- struct{}{}
	}
	set = func(key K, value V) {
		for i := 0; i < 10; i++ {
			m.Store(key, value)
		}
		finished <- struct{}{}
	}
	return
}

func runWithShards(bench func(b *testing.B), b *testing.B, shardsCount int) {
	oldShardsCount := SHARD_COUNT
	SHARD_COUNT = shardsCount
	bench(b)
	SHARD_COUNT = oldShardsCount
}

func BenchmarkKeys(b *testing.B) {
	m := New[Animal]()

	// Insert 10000 elements.
	for i := 0; i < 10000; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}
	for i := 0; i < b.N; i++ {
		m.Keys()
	}
}

==== golang-github-orcaman-concurrent-map-2.0.1/concurrent_map_test.go ====
package cmap

import (
	"encoding/json"
	"hash/fnv"
	"sort"
	"strconv"
	"testing"
)

type Animal struct {
	name string
}

func TestMapCreation(t *testing.T) {
	m := New[string]()
	if m.shards == nil {
		t.Error("map is null.")
	}

	if m.Count() != 0 {
		t.Error("new map should be empty.")
	}
}

func TestInsert(t *testing.T) {
	m := New[Animal]()
	elephant := Animal{"elephant"}
	monkey := Animal{"monkey"}

	m.Set("elephant", elephant)
	m.Set("monkey", monkey)

	if m.Count() != 2 {
		t.Error("map should contain exactly two elements.")
	}
}

func TestInsertAbsent(t *testing.T) {
	m := New[Animal]()
	elephant := Animal{"elephant"}
	monkey := Animal{"monkey"}

	m.SetIfAbsent("elephant", elephant)
	if ok := m.SetIfAbsent("elephant", monkey); ok {
		t.Error("map set a new value even though the entry was already present")
	}
}

func TestGet(t *testing.T) {
	m := New[Animal]()

	// Get a missing element.
	val, ok := m.Get("Money")

	if ok == true {
		t.Error("ok should be false when item is missing from map.")
	}

	if (val != Animal{}) {
		t.Error("Missing values should return as zero values.")
	}

	elephant := Animal{"elephant"}
	m.Set("elephant", elephant)

	// Retrieve inserted element.
	elephant, ok = m.Get("elephant")
	if ok == false {
		t.Error("ok should be true for item stored within the map.")
	}

	if elephant.name != "elephant" {
		t.Error("item was modified.")
	}
}

func TestHas(t *testing.T) {
	m := New[Animal]()

	// Look up a missing element.
	if m.Has("Money") == true {
		t.Error("element shouldn't exist")
	}

	elephant := Animal{"elephant"}
	m.Set("elephant", elephant)

	if m.Has("elephant") == false {
		t.Error("element exists, expecting Has to return True.")
	}
}

func TestRemove(t *testing.T) {
	m := New[Animal]()

	monkey := Animal{"monkey"}
	m.Set("monkey", monkey)

	m.Remove("monkey")

	if m.Count() != 0 {
		t.Error("Expecting count to be zero once item was removed.")
	}

	temp, ok := m.Get("monkey")

	if ok != false {
		t.Error("Expecting ok to be false for missing items.")
	}

	if (temp != Animal{}) {
		t.Error("Expecting item to be the zero value after its removal.")
	}

	// Remove a non-existing element.
	m.Remove("noone")
}

func TestRemoveCb(t *testing.T) {
	m := New[Animal]()

	monkey := Animal{"monkey"}
	m.Set("monkey", monkey)
	elephant := Animal{"elephant"}
	m.Set("elephant", elephant)

	var (
		mapKey   string
		mapVal   Animal
		wasFound bool
	)
	cb := func(key string, val Animal, exists bool) bool {
		mapKey = key
		mapVal = val
		wasFound = exists

		return val.name == "monkey"
	}

	// Monkey should be removed
	result := m.RemoveCb("monkey", cb)
	if !result {
		t.Errorf("Result was not true")
	}

	if mapKey != "monkey" {
		t.Error("Wrong key was provided to the callback")
	}

	if mapVal != monkey {
		t.Errorf("Wrong value was provided to the callback")
	}

	if !wasFound {
		t.Errorf("Key was not found")
	}

	if m.Has("monkey") {
		t.Errorf("Key was not removed")
	}

	// Elephant should not be removed
	result = m.RemoveCb("elephant", cb)
	if result {
		t.Errorf("Result was true")
	}

	if mapKey != "elephant" {
		t.Error("Wrong key was provided to the callback")
	}

	if mapVal != elephant {
		t.Errorf("Wrong value was provided to the callback")
	}

	if !wasFound {
		t.Errorf("Key was not found")
	}

	if !m.Has("elephant") {
		t.Errorf("Key was removed")
	}

	// Unset key should remain unset
	result = m.RemoveCb("horse", cb)
	if result {
		t.Errorf("Result was true")
	}

	if mapKey != "horse" {
		t.Error("Wrong key was provided to the callback")
	}

	if (mapVal != Animal{}) {
		t.Errorf("Wrong value was provided to the callback")
	}

	if wasFound {
		t.Errorf("Key was found")
	}

	if m.Has("horse") {
		t.Errorf("Key was created")
	}
}

func TestPop(t *testing.T) {
	m := New[Animal]()

	monkey := Animal{"monkey"}
	m.Set("monkey", monkey)

	v, exists := m.Pop("monkey")

	if !exists || v != monkey {
		t.Error("Pop didn't find a monkey.")
	}

	v2, exists2 := m.Pop("monkey")

	if exists2 || v2 == monkey {
		t.Error("Pop keeps finding monkey")
	}

	if m.Count() != 0 {
		t.Error("Expecting count to be zero once item was Pop'ed.")
	}

	temp, ok := m.Get("monkey")

	if ok != false {
		t.Error("Expecting ok to be false for missing items.")
	}

	if (temp != Animal{}) {
		t.Error("Expecting item to be the zero value after its removal.")
	}
}

func TestCount(t *testing.T) {
	m := New[Animal]()
	for i := 0; i < 100; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	if m.Count() != 100 {
		t.Error("Expecting 100 elements within map.")
	}
}

func TestIsEmpty(t *testing.T) {
	m := New[Animal]()

	if m.IsEmpty() == false {
		t.Error("new map should be empty")
	}

	m.Set("elephant", Animal{"elephant"})

	if m.IsEmpty() != false {
		t.Error("map shouldn't be empty.")
	}
}

func TestIterator(t *testing.T) {
	m := New[Animal]()

	// Insert 100 elements.
	for i := 0; i < 100; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	counter := 0
	// Iterate over elements.
	for item := range m.Iter() {
		val := item.Val

		if (val == Animal{}) {
			t.Error("Expecting an object.")
		}
		counter++
	}

	if counter != 100 {
		t.Error("We should have counted 100 elements.")
	}
}

func TestBufferedIterator(t *testing.T) {
	m := New[Animal]()

	// Insert 100 elements.
	for i := 0; i < 100; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	counter := 0
	// Iterate over elements.
	for item := range m.IterBuffered() {
		val := item.Val

		if (val == Animal{}) {
			t.Error("Expecting an object.")
		}
		counter++
	}

	if counter != 100 {
		t.Error("We should have counted 100 elements.")
	}
}

func TestClear(t *testing.T) {
	m := New[Animal]()

	// Insert 100 elements.
	for i := 0; i < 100; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	m.Clear()

	if m.Count() != 0 {
		t.Error("We should have 0 elements.")
	}
}

func TestIterCb(t *testing.T) {
	m := New[Animal]()

	// Insert 100 elements.
	for i := 0; i < 100; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	counter := 0
	// Iterate over elements.
	m.IterCb(func(key string, v Animal) {
		counter++
	})
	if counter != 100 {
		t.Error("We should have counted 100 elements.")
	}
}

func TestItems(t *testing.T) {
	m := New[Animal]()

	// Insert 100 elements.
	for i := 0; i < 100; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	items := m.Items()

	if len(items) != 100 {
		t.Error("We should have counted 100 elements.")
	}
}

func TestConcurrent(t *testing.T) {
	m := New[int]()
	ch := make(chan int)
	const iterations = 1000
	var a [iterations]int

	// Using two goroutines, insert 1000 ints into our map.
	go func() {
		for i := 0; i < iterations/2; i++ {
			// Add item to map.
			m.Set(strconv.Itoa(i), i)

			// Retrieve item from map.
			val, _ := m.Get(strconv.Itoa(i))

			// Write the inserted value to the channel.
			ch <- val
		}
	}()

	go func() {
		for i := iterations / 2; i < iterations; i++ {
			// Add item to map.
			m.Set(strconv.Itoa(i), i)

			// Retrieve item from map.
			val, _ := m.Get(strconv.Itoa(i))

			// Write the inserted value to the channel.
			ch <- val
		}
	}()

	// Wait for both goroutines to finish.
	counter := 0
	for elem := range ch {
		a[counter] = elem
		counter++
		if counter == iterations {
			break
		}
	}

	// Sorting the array makes it simpler to verify that all inserted values were returned.
	sort.Ints(a[0:iterations])

	// Make sure the map contains 1000 elements.
	if m.Count() != iterations {
		t.Error("Expecting 1000 elements.")
	}

	// Make sure all inserted values were fetched from the map.
	for i := 0; i < iterations; i++ {
		if i != a[i] {
			t.Error("missing value", i)
		}
	}
}

func TestJsonMarshal(t *testing.T) {
	SHARD_COUNT = 2
	defer func() {
		SHARD_COUNT = 32
	}()
	expected := "{\"a\":1,\"b\":2}"
	m := New[int]()
	m.Set("a", 1)
	m.Set("b", 2)
	j, err := json.Marshal(m)
	if err != nil {
		t.Error(err)
	}

	if string(j) != expected {
		t.Error("json", string(j), "differs from expected", expected)
		return
	}
}

func TestKeys(t *testing.T) {
	m := New[Animal]()

	// Insert 100 elements.
	for i := 0; i < 100; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	keys := m.Keys()
	if len(keys) != 100 {
		t.Error("We should have counted 100 elements.")
	}
}

func TestMInsert(t *testing.T) {
	animals := map[string]Animal{
		"elephant": {"elephant"},
		"monkey":   {"monkey"},
	}
	m := New[Animal]()
	m.MSet(animals)

	if m.Count() != 2 {
		t.Error("map should contain exactly two elements.")
	}
}

func TestFnv32(t *testing.T) {
	key := []byte("ABC")

	hasher := fnv.New32()
	_, err := hasher.Write(key)
	if err != nil {
		t.Error(err)
	}
	if fnv32(string(key)) != hasher.Sum32() {
		t.Errorf("Bundled fnv32 produced %d, expected result from hash/fnv32 is %d", fnv32(string(key)), hasher.Sum32())
	}
}

func TestUpsert(t *testing.T) {
	dolphin := Animal{"dolphin"}
	whale := Animal{"whale"}
	tiger := Animal{"tiger"}
	lion := Animal{"lion"}

	cb := func(exists bool, valueInMap Animal, newValue Animal) Animal {
		if !exists {
			return newValue
		}
		valueInMap.name += newValue.name
		return valueInMap
	}

	m := New[Animal]()
	m.Set("marine", dolphin)
	m.Upsert("marine", whale, cb)
	m.Upsert("predator", tiger, cb)
	m.Upsert("predator", lion, cb)

	if m.Count() != 2 {
		t.Error("map should contain exactly two elements.")
	}

	marineAnimals, ok := m.Get("marine")
	if marineAnimals.name != "dolphinwhale" || !ok {
		t.Error("Set, then Upsert failed")
	}

	predators, ok := m.Get("predator")
	if !ok || predators.name != "tigerlion" {
		t.Error("Upsert, then Upsert failed")
	}
}

func TestKeysWhenRemoving(t *testing.T) {
	m := New[Animal]()

	// Insert 100 elements.
	Total := 100
	for i := 0; i < Total; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}

	// Remove 10 elements concurrently.
	Num := 10
	for i := 0; i < Num; i++ {
		go func(c *ConcurrentMap[string, Animal], n int) {
			c.Remove(strconv.Itoa(n))
		}(&m, i)
	}
	keys := m.Keys()
	for _, k := range keys {
		if k == "" {
			t.Error("Empty keys returned")
		}
	}
}

func TestUnDrainedIter(t *testing.T) {
	m := New[Animal]()
	// Insert 100 elements.
	Total := 100
	for i := 0; i < Total; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}
	counter := 0
	// Iterate over elements.
	ch := m.Iter()
	for item := range ch {
		val := item.Val

		if (val == Animal{}) {
			t.Error("Expecting an object.")
		}
		counter++
		if counter == 42 {
			break
		}
	}
	for i := Total; i < 2*Total; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}
	for item := range ch {
		val := item.Val

		if (val == Animal{}) {
			t.Error("Expecting an object.")
		}
		counter++
	}

	if counter != 100 {
		t.Error("We should have been right where we stopped")
	}

	counter = 0
	for item := range m.IterBuffered() {
		val := item.Val

		if (val == Animal{}) {
			t.Error("Expecting an object.")
		}
		counter++
	}

	if counter != 200 {
		t.Error("We should have counted 200 elements.")
	}
}

func TestUnDrainedIterBuffered(t *testing.T) {
	m := New[Animal]()
	// Insert 100 elements.
	Total := 100
	for i := 0; i < Total; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}
	counter := 0
	// Iterate over elements.
	ch := m.IterBuffered()
	for item := range ch {
		val := item.Val

		if (val == Animal{}) {
			t.Error("Expecting an object.")
		}
		counter++
		if counter == 42 {
			break
		}
	}
	for i := Total; i < 2*Total; i++ {
		m.Set(strconv.Itoa(i), Animal{strconv.Itoa(i)})
	}
	for item := range ch {
		val := item.Val

		if (val == Animal{}) {
			t.Error("Expecting an object.")
		}
		counter++
	}

	if counter != 100 {
		t.Error("We should have been right where we stopped")
	}

	counter = 0
	for item := range m.IterBuffered() {
		val := item.Val

		if (val == Animal{}) {
			t.Error("Expecting an object.")
		}
		counter++
	}

	if counter != 200 {
		t.Error("We should have counted 200 elements.")
	}
}

==== golang-github-orcaman-concurrent-map-2.0.1/go.mod ====
module github.com/orcaman/concurrent-map/v2

go 1.18