There is no single right jitter buffer management scheme I can prescribe, and I would not offer a simple rule like "increase it by 10%". If you are dealing with voice packets, what you want is to play the audio as quickly as possible without suffering excessive buffer under-runs.
Let's assume you have 10ms G.711 "frames" sent one "frame" per packet. If you transmit the packet across the globe, the delay might be 100 or even 300ms. There is also delay in encoding and decoding. So, you do not want to add even more delay with an excessively long jitter buffer.
When the PSTN was the only means by which voice communication occurred, the ITU recommended an end-to-end delay that did not exceed 100ms. I would still argue that is ideal, but packet-switched networks today often do introduce more delay. So, the new recommendation is to not exceed 400ms. However, most would agree that is high. Over a satellite link, though, it might be the norm.
So, when you get a packet, try to play it. If the next packet arrives too late, take note of that excess delay. Let's say the packet arrived 2ms too late. That would suggest the jitter is at least 2ms. Perhaps the second packet was delayed by 10ms. Now you can assume the jitter is at least 10ms.
Keep in mind that the IP network is always changing, so don't hold on to jitter buffer calculations for extended periods of time. I would personally maintain an average of jitter measurements over the last 5 minutes. Specifically, I would maintain an average of how "late" each packet is (see the sketch below). For jitter buffer management, you're not so concerned with end-to-end delay; you're concerned with how late a packet is relative to when it should have arrived based on the current clock.
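To make that concrete, here is a minimal sketch of one way to track lateness over a sliding window. The class name, the 5-minute window, and the millisecond units are just assumptions for illustration:

```python
import time
from collections import deque

class LatenessTracker:
    """Tracks how late packets arrive relative to when they were expected."""

    def __init__(self, window_seconds=300):          # assume a 5-minute window
        self.window = window_seconds
        self.samples = deque()                       # (wall_time, lateness_ms) pairs

    def record(self, expected_ms, actual_ms):
        lateness = actual_ms - expected_ms           # positive = late, negative = early
        now = time.monotonic()
        self.samples.append((now, lateness))
        # Drop measurements older than the window so stale network
        # conditions stop influencing the jitter buffer.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def average_lateness(self):
        if not self.samples:
            return 0.0
        return sum(l for _, l in self.samples) / len(self.samples)
```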
Let's suppose you just started playing a packet: the time is 0ms. You are maintaining a buffer of three 10ms frames. At 8ms into playing this packet, you receive a new packet. This packet is early by 2ms. That's good, but it might suggest the jitter buffer is too large. Perhaps the packet arrives after 15ms instead. It's effectively 5ms late, which suggests the jitter buffer might be too small. So long as packets are arriving within this window, though, having three 10ms packets in the buffer is plenty. There would be no need to increase the jitter buffer.
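A rough sketch of that bookkeeping, using the numbers from the example (the frame duration and function name are assumptions):

```python
FRAME_MS = 10  # one G.711 frame per packet, 10ms of audio each

def lateness_relative_to_playout(playout_start_ms, frames_played, arrival_ms):
    """How late (positive) or early (negative) a packet is, relative to when
    the next frame is needed by the local playout clock."""
    expected_ms = playout_start_ms + frames_played * FRAME_MS
    return arrival_ms - expected_ms

# Playout started at 0ms and one frame is playing, so the next
# packet is expected at 10ms.
print(lateness_relative_to_playout(0, 1, 8))    # -2  -> 2ms early
print(lateness_relative_to_playout(0, 1, 15))   #  5  -> 5ms late
```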
Now, at what point do you increase the buffer? Perhaps when the average excess delay exceeds the number of milliseconds of audio in the jitter buffer. That is, if you have 30ms of audio in the buffer but are seeing average excess delays of 50ms, perhaps you want more packets in the jitter buffer.
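One possible form of that rule, sketched with assumed names and thresholds:

```python
def should_grow_buffer(avg_excess_delay_ms, frames_buffered, frame_ms=10):
    """Grow the jitter buffer when packets are, on average, arriving later
    than the amount of audio we have in hand to cover the gap."""
    buffered_audio_ms = frames_buffered * frame_ms
    return avg_excess_delay_ms > buffered_audio_ms

# 30ms of audio buffered, but packets averaging 50ms late: time to grow.
print(should_grow_buffer(50, 3))   # True
print(should_grow_buffer(5, 3))    # False
```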
Now, is maintaining an average sophisticated enough? Or do you also need to look at the standard deviation? Don't forget to throw out statistical outliers, too: you don't want to include in the average any packet that arrives significantly later than the others. Plot what you see on a graph. Ideally, a histogram of packet arrivals would look like a normal distribution. Over time, you might see shifts in the median, and it is those shifts in the median that would trigger a change in the jitter buffer size. There is a bit of math involved in this exercise, but I think this is your assignment.
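A hedged sketch of that kind of analysis, with the outlier cutoff and median-shift threshold picked arbitrarily:

```python
import statistics

def filtered_stats(lateness_samples, outlier_sigma=3.0):
    """Return (median, stdev) of lateness after discarding gross outliers."""
    if len(lateness_samples) < 2:
        return (lateness_samples[0] if lateness_samples else 0.0), 0.0
    mean = statistics.fmean(lateness_samples)
    stdev = statistics.stdev(lateness_samples)
    kept = [x for x in lateness_samples
            if stdev == 0 or abs(x - mean) <= outlier_sigma * stdev]
    return statistics.median(kept), (statistics.stdev(kept) if len(kept) > 1 else 0.0)

def median_shifted(old_median_ms, new_median_ms, threshold_ms=5.0):
    """A sustained shift in the median, not a single stray packet, is what
    should trigger resizing the jitter buffer."""
    return abs(new_median_ms - old_median_ms) > threshold_ms
```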
Your task is to figure out how to minimize delay while also minimizing the number of packets discarded due to lateness. If you create the perfect minimization function given playout delay and discard rate, then you have the answer to your question.
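If it helps to frame the problem, that trade-off might be written as a single cost to minimize; the discard weight below is purely an assumption you would have to tune by listening tests:

```python
def playout_cost(playout_delay_ms, discard_rate, discard_weight=1000.0):
    """A toy cost function: total playout delay plus a penalty for packets
    discarded because they arrived too late. The buffer size that minimizes
    this cost over recent measurements is the one to use."""
    return playout_delay_ms + discard_weight * discard_rate
```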