Many threads blocked in LRUMap (with ScalaObjectMapper) -- increase type cache size? #428
TL;DR: I am open to increasing the cache size, but starting with the Scala module, since I think this is likely due to Scala or the Scala module's usage (and possibly the sheer number of throw-away classes). Aside from that, if at all possible, I would suggest you see whether it would be possible to pre-resolve types before considering improvements to cache handling in `jackson-databind` itself. So, first things first: I think increasing the cache size for the Scala module probably makes sense. This does not require a change to `jackson-databind`.
I suspect the problem is more likely to happen in Scala because Scala code tends to have more types, particularly with the Scala collection hierarchy, but I don't think it's fair to characterize this as a Scala problem. I can easily imagine a Java app having over 200 types in this cache, especially when using lots of nested types for domain models. To me it makes sense to change the default cache size to something much higher, maybe 1000 or so. Changing the cache implementation is a more involved discussion.
@pjfanning On the Scala side, do you think it'd be easy to change the type cache the mapper uses?
@cowtowncoder ScalaObjectMapper is a mixin for ObjectMapper (as opposed to being a subclass). ScalaObjectMapper gets the TypeFactory from the ObjectMapper. You can create a TypeFactory with a custom LRUMap and then call `setTypeFactory` on the ObjectMapper to install it.
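A minimal sketch of that approach (assuming Jackson 2.8+, where `TypeFactory.withCache(LRUMap)` is available; the entry counts here are illustrative, not recommendations):

```scala
import com.fasterxml.jackson.databind.{JavaType, ObjectMapper}
import com.fasterxml.jackson.databind.`type`.TypeFactory
import com.fasterxml.jackson.databind.util.LRUMap

// A type cache holding up to 1000 entries instead of the default 200.
val biggerCache = new LRUMap[Object, JavaType](16, 1000)

val mapper = new ObjectMapper()
// Install a TypeFactory that uses the larger cache.
mapper.setTypeFactory(TypeFactory.defaultInstance().withCache(biggerCache))
```

Registering `DefaultScalaModule` (or mixing in `ScalaObjectMapper`) works as usual on top of this; the cache swap is independent of which modules are registered.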
@pjfanning Ok. This is why I asked, as I can't even read Scala well. :) Anyway, how about I transfer this issue to `jackson-module-scala`?
An ObjectMapper can be created with a larger typeCache (1000 max entries instead of 200) by building a TypeFactory with a bigger LRUMap and setting it on the mapper via `setTypeFactory`.
@cowtowncoder there is no easy solution to making Scala ObjectMappers have a larger default size for the LRUMap. Would it be ok to support a system property that allows users to override the default max of 200?
I did not mean to change the default size for `jackson-databind`.
@cowtowncoder my suggestion is to leave the LRUMap with a default max of 200, but if a system property is set then that value will override the default. #428 (comment) provides a workaround that affected users can use.
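A minimal sketch of what such an override could look like (the property name `jackson.typeCache.maxEntries` is hypothetical; this mechanism was proposed here, not adopted by the library):

```scala
// Hypothetical: read an override for the type-cache size from a
// system property, falling back to the current default of 200.
val DefaultMaxEntries = 200

val maxEntries: Int =
  sys.props.get("jackson.typeCache.maxEntries")
    .flatMap(s => scala.util.Try(s.toInt).toOption) // ignore non-numeric values
    .getOrElse(DefaultMaxEntries)
```

When the property is unset (or unparseable), `maxEntries` stays at the default of 200.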
Ok: unfortunately I am pretty strongly against the use of system properties -- they are stateful global singletons, and (in my opinion) something that libraries should use rarely if ever. One of the design goals with Jackson mappers is to keep them isolated, with their own configuration, and global state settings go against that goal. So I don't want to add support that way in jackson-databind. Then again, I am not sure if this is a problem any more: or if it is, whether a cache size change would help significantly. Maybe it is more a symptom of the way the Scala mapper overuses type resolution.
@pjfanning Would the recent changes to 2.12 help here?
@cowtowncoder You still need to explicitly set a new type cache -- e.g. #428 (comment).
@pjfanning Ah. Hmmh. Yes, of course, it was the global cache, not specific to the module. I think we went over the same ideas, including bigger changes to allow users to configure caching aspects more generally. I think I'll add a note somewhere on this, as a "maybe in 2.13" kind of thing.
Older me thought of this back then already :)
While this issue is not necessarily completely solved, it may be significantly helped by FasterXML/jackson-databind#3530, which will be in Jackson (databind) 2.14. Specifically, it should help by using an actual LRU implementation, so that if the working set fits in the maximum size there won't be "clear all" events, which may be what is happening currently.
(NOTE: moved from jackson-databind on 11-Sep-2019 by @cowtowncoder)

Version: Jackson 2.8.11, probably also 2.9.7
We have an API server that receives requests and then makes Elasticsearch requests using the elastic4s library. The JSON parsing is done using Jackson under the hood. We've noticed a lot of threads blocked on the synchronized blocks in the `LRUMap` data structure, which seems to be contributing to slow performance on our API server. Several threads are blocked on puts to the map, and another thread is blocked on `clear()`.

Based on heap dumps, we see that the `_typeCache` map in `TypeFactory` is getting filled up quickly with type information for the JSON objects we're parsing, then the map is getting cleared, then the process is repeated quickly. Since the `clear()` operation is relatively expensive and inside a synchronized block, it blocks all other threads from making progress, and this can happen repeatedly in a short period of time.

We could replace the `ConcurrentHashMap` with a synchronized `LinkedHashMap` with an LRU eviction policy, and/or increase the size of the cache. Either way, clearing the cache seems like an inefficient solution, especially when many threads are trying to use the cache.
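As a sketch of the eviction policy proposed above (a hypothetical helper, not Jackson's actual implementation): a synchronized `java.util.LinkedHashMap` in access order can evict one least-recently-used entry at a time, instead of clearing the whole cache when it fills up:

```scala
import java.util.{Collections, LinkedHashMap => JLinkedHashMap, Map => JMap}

// Hypothetical bounded LRU cache: evicts only the least-recently-used
// entry once maxEntries is exceeded, rather than clearing the whole map.
def lruCache[K, V](maxEntries: Int): JMap[K, V] =
  Collections.synchronizedMap(
    new JLinkedHashMap[K, V](16, 0.75f, true) { // true = access-order iteration
      override def removeEldestEntry(eldest: JMap.Entry[K, V]): Boolean =
        size() > maxEntries
    })

val cache = lruCache[String, Int](2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")    // touch "a" so it becomes most recently used
cache.put("c", 3) // evicts only "b", the least recently used entry
```

The trade-off versus the existing `ConcurrentHashMap`-based `LRUMap` is that every read also mutates the access-order list, so all operations go through one lock; whether that beats periodic `clear()` stalls depends on the workload.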