The memory used by the amqrmppa (channel pool) processes is entirely dependent on the channels running in those pool processes. If a sender-receiver channel is moving large messages it will use more memory, because it has to create staging areas to reconstruct each message from the 32K chunks used to send it across the network. If the channel goes idle (only heartbeats flowing) it will free those areas off, but while it stays busy it may hold onto them for longer. It may also shrink the amount of memory retained if it doesn't see any big messages for a while. So usage is very dependent on your pattern of messages and what size they are.
You also mention a badly behaved application, so it sounds like you have SVRCONN channels in your pool processes as well. Their memory usage is much more dependent on the client application's behaviour, which is harder to quantify.
How many channels do you have running in each pool process? Are the ones "up at the 230k mark" the ones hosting the most channels? Use DISPLAY CHSTATUS(*) JOBNAME to see the PID and TID of the pool process used by each running channel.
I don't understand why you are not getting the PID and TID from the JOBNAME. Can you post an example of the JOBNAME field you see in your DISPLAY CHSTATUS output?
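Once you can see JOBNAME values, you can tally channels per pool process with a few lines of script. A minimal sketch, assuming the UNIX-style format where JOBNAME is two fixed-width hexadecimal fields, PID then TID (the sample values below are made up; verify the format against ps output on your platform before relying on it):

```python
# Hypothetical decoder for the JOBNAME field shown by DISPLAY CHSTATUS.
# Assumption: JOBNAME is 8 hex digits of PID followed by 8 hex digits of TID.
from collections import Counter

def decode_jobname(jobname, width=8):
    """Split a JOBNAME like '0000190A00000007' into (pid, tid)."""
    pid = int(jobname[:width], 16)
    tid = int(jobname[width:], 16)
    return pid, tid

def channels_per_process(jobnames):
    """Count how many running channels each pool-process PID is hosting."""
    return Counter(decode_jobname(j)[0] for j in jobnames)

# Illustrative values only, not real output:
sample = ["0000190A00000007", "0000190A00000008", "0000191200000003"]
print(channels_per_process(sample))  # Counter({6410: 2, 6418: 1})
```

Feed it the JOBNAME column from your CHSTATUS output and compare the counts against the processes sitting at 230k.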
You could look into the SVRCONN attribute DISCINT to tidy up idle connections from the queue manager end.
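For example, in runmqsc (the channel name here is a placeholder; DISCINT on a SVRCONN is in seconds, and 0 means never disconnect):

```
ALTER CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) DISCINT(1800)
DISPLAY CHANNEL('APP.SVRCONN') DISCINT
```

That would have the queue manager end idle client conversations after 30 minutes, which at least stops forgotten connections accumulating indefinitely.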
Your problem may be due to applications such as web servers or "harnesses" doing connection pooling and not freeing the sessions afterwards. The harness may hide MQ from the applications: the application has finished with the connection, but the pooling code has not given it back. There may be a tuning parameter along the lines of "keep minimum pool size active". Monitor the threads and see whether they ever get given back.
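To see what the pool is actually holding, you can list the connections on the suspect channel from runmqsc (a sketch; the channel name is a placeholder):

```
DISPLAY CONN(*) WHERE(CHANNEL EQ 'APP.SVRCONN') APPLTAG PID TID
```

If the same connection handles sit there for hours while the application claims to be idle, the pooling layer is not returning them.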
You could try an activity trace (overnight) to see whether the threads are doing anything when the system is idle.
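On recent distributed MQ levels the application activity trace can be switched on from MQSC (a sketch; check that your version supports the ACTVTRC queue manager attribute, and remember to turn it off afterwards as it has a cost):

```
ALTER QMGR ACTVTRC(ON)
* ...leave it running overnight, then switch it off again:
ALTER QMGR ACTVTRC(OFF)
```

The trace records can then be formatted (e.g. with the amqsact sample on levels that ship it) to see which connections issued MQI calls during the quiet period.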