I was testing the retry mechanism of the Android tracker on Android emulators, and I saw it retry sending requests after 404 and 500 network errors. I was wondering whether the tracker retries requests for all sorts of errors, or only for a limited set of error types?
I’ll let others correct me, but looking at the tracker source code, it appears that every request with a status code outside the 2xx range (200 to 299) is deemed failed and retried.
The mechanism here is common to all the trackers: there’s a buffer of queued events, and if a request fails, the events stay in the queue to be retried later. When the tracker attempts to send any event, it also attempts to send the other events in the buffer. There is no differentiation between error types; all failures are treated the same.
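As a rough illustration of that buffer behaviour, here is a simplified sketch (these are hypothetical classes for illustration, not the tracker's actual code): anything outside 2xx goes back in the buffer, and every flush retries whatever is still queued.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.ToIntFunction;

// Simplified sketch of the buffer-and-retry behaviour described above.
public class RetryBufferSketch {
    // Any status outside 2xx counts as a failed send.
    static boolean isSuccess(int statusCode) {
        return statusCode >= 200 && statusCode < 300;
    }

    private final Deque<String> buffer = new ArrayDeque<>();

    void track(String event) {
        buffer.add(event);
    }

    // Attempt to send every buffered event in one pass; events whose
    // response is non-2xx go back into the buffer for a later retry,
    // with no differentiation between error types.
    List<String> flush(ToIntFunction<String> send) {
        List<String> delivered = new ArrayList<>();
        int pendingNow = buffer.size();
        for (int i = 0; i < pendingNow; i++) {
            String event = buffer.poll();
            if (isSuccess(send.applyAsInt(event))) {
                delivered.add(event);
            } else {
                buffer.add(event); // kept for the next attempt
            }
        }
        return delivered;
    }

    int pending() {
        return buffer.size();
    }
}
```

Here the `send` function stands in for the HTTP request; only the status code matters for the retry decision.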
The principle here is that every event tracked should land at the collector as soon as possible (within the batching configuration), with no scope for dropping events. The collector is built for availability: it is usually provisioned with multiple instances across multiple regions, behind a load balancer. In the unlikely event that a collector does go down, this tracker mechanism still prevents data loss.
If your tracker is getting 4xx or 5xx responses because of some misconfiguration (e.g. a wrong endpoint), then the failed events will stay in the buffer until the issue is resolved. However, once you update the app with a fix, they should be successfully sent to the newly configured endpoint (unless the new configuration changes the tracker’s namespace or replaces the tracker instance altogether).
Thank you @matus and @Colm
That cleared my doubts!
So if the tracker’s namespace and the endpoint are changed as mentioned, the requests in the buffer would not get across to the collector and would be cleared, am I right?
I’m actually not 100% sure - it’s typically something that wouldn’t be done.
I think that what this would do is establish a new tracker, but the stored events of the old tracker would probably still exist on disk until the cache is cleared. But I’m not certain, to be honest!
I can confirm it. As of v2 of the mobile trackers, you can have multiple tracker instances in the same app. These instances are distinguished by the tracker namespace, and all events tracked with a namespace but not yet sent remain attached to that specific namespace.
In theory, in a single-tracker app the namespace should never change.
Hi @Alex_Benini and @Colm,
Is there a way to control the amount of data the tracker can store in the database, to prevent the source application from crashing?
E.g. storing only one day’s worth of failed events (when the collector is unavailable) to prevent the app from crashing?
Thanks for raising this. Unfortunately, there isn’t anything like that at the moment. It’s definitely an interesting feature; we discussed it in the past, but it never got enough attention. We are now working on v3 of our mobile trackers, so we can see if there is room to introduce this feature.
For now, you can programmatically clear the tracker databases using an undisclosed static method of
You have to pass as an argument the list of namespaces that you want to keep; it will clear all the databases associated with the namespaces you haven’t passed.
Another option is to use the tracker’s remote configuration to force a tracker instance (with a specific namespace) to use a different, valid collector. This way, the database will be flushed out automatically as the buffered events are sent.
Oh, okay, I’ll try clearing the tracker database using the static method.
Thank you @Alex_Benini!
I tried using the method you mentioned earlier and it worked all right.
I looked at the SQLiteEventStore class and wrote the following method to clear the database, with the days parameter being the maximum age of the data to keep.
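A sketch of the kind of method I mean (the `events` table and `dateCreated` column names here are assumptions from reading the schema, so double-check them against your tracker version):

```java
// Hedged sketch of an age-based cleanup; "events" and "dateCreated" are
// assumed names for illustration, not a guaranteed public contract.
public class AgePruneSketch {
    // Builds a DELETE statement that removes rows older than `days` days.
    // SQLite's default CURRENT_TIMESTAMP text format compares correctly
    // against datetime('now', ...) offsets.
    static String deleteOlderThan(String table, String column, int days) {
        return "DELETE FROM " + table
                + " WHERE " + column + " < datetime('now', '-" + days + " days')";
    }
}
```

The resulting statement would be executed against the tracker's SQLite database.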
Are there better ways to do this?
Also, I was wondering if there is some way to access the queryDatabase function of the SQLiteEventStore class.
Your solution is good! I don’t think there are better ways. It may not be a problem in your case, but it would be good to double-check whether there are concurrent accesses from multiple threads.
Unfortunately, I don’t have a solution for this, but I agree that it wouldn’t hurt to make it public.
We will consider this option for future versions.
Thanks @Alex_Benini for the suggestion on concurrent access, will take a look at it!!