I perform some fairly complex calculations on the data I return from a query. Complex enough that I don't want to raise the limit on the data I return above the standard 50. Even that might be too much; I'll have to see how it performs on the live site.
I don't need the full set of data anyway, so I would like to retrieve a random sample. I was thinking of a few possibilities:
- Somehow get the total number of rows. gettotalcount() seems to only return the filtered count, so it would just be whatever the limit is, which isn't helpful.
- Manually keep track of the total number of rows and have that accessible (for now this would be fine, but far from ideal once users can create their own content).
- Randomly pick a number of rows to skip that is somewhat close to the total number of rows; if the query doesn't come back with a full 50, pick a different, smaller number (see the sketch after this list).
- Or, even better, just get a random 50 items.
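To make the random-skip option concrete, here is a minimal sketch of what I have in mind. It assumes a hypothetical query(skip, limit) helper and a rough row-count estimate; neither is something the API provides as written, they just stand in for whatever calls are actually available.

```python
import random

PAGE_SIZE = 50

def fetch_random_page(query, estimated_total):
    """Fetch one page of up to PAGE_SIZE rows starting at a random offset.

    `query(skip, limit)` is a hypothetical wrapper around whatever data
    call is actually available, and `estimated_total` is a rough row
    count that may be stale or too large.
    """
    # Don't skip so far that fewer than PAGE_SIZE rows could remain.
    max_skip = max(0, estimated_total - PAGE_SIZE)
    skip = random.randint(0, max_skip)

    rows = query(skip=skip, limit=PAGE_SIZE)

    # If the estimate was too high and we ran off the end of the table,
    # keep halving the skip until a full page comes back (or we hit 0).
    while len(rows) < PAGE_SIZE and skip > 0:
        skip //= 2
        rows = query(skip=skip, limit=PAGE_SIZE)

    return rows
```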
Is there any way to do this cleanly? If I always use the first 50 items, some will never be used. As the table gets larger that stays true no matter how I sort the data; there will always be items that never get utilized.
What I am doing is iterating through all 50 items, building a new list of the items that are possible to use, then picking one of that subset at random for assignment. The math to figure out which items are possible takes some time, roughly 50 ms per item on sandbox, and I have 3 groupings of items to look through, so about 7.5 seconds total. I can't really prefilter using only basic operators like eq, lt, etc., so the query itself just needs to return some data I can work with.
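Roughly, the per-grouping selection looks like the sketch below. is_possible() is just a placeholder for the expensive per-item math, and the names are illustrative rather than actual code.

```python
import random

def pick_item(items, is_possible):
    """Pick one usable item at random from a page of query results.

    `items` is the page of up to 50 rows and `is_possible(item)` stands
    in for the expensive per-item math (~50 ms each); both names are
    placeholders for illustration.
    """
    candidates = [item for item in items if is_possible(item)]
    if not candidates:
        return None
    return random.choice(candidates)

# This runs once per grouping, so 3 groupings x 50 items x ~50 ms
# is roughly where the 7.5 second total comes from.
```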
Any ideas would be helpful. It isn't a huge issue yet, since there will be right around 50 of each, but this will expand to more than 1,000 in the future.