IngeniousRocks (They/She)

Don’t DM me without permission please

  • 0 Posts
  • 123 Comments
Joined 11 months ago
Cake day: December 7th, 2024

  • This is correct for unmanaged batteries. Batteries with a BMS, however, will never drop below whatever voltage is set as their 0% unless they’re allowed to sit at 0% long enough that e n t r o p y (read: self-discharge) slowly drains the remaining charge. This will happen even to a fully charged battery left to its own devices (ba dum tss) for too long.

    The point of the BMS is to manage the health of potentially dangerous lithium batteries. As long as they are used within spec, it should keep voltages from dropping so low that the cells enter deep discharge, and it should also prevent overcharging due to imbalanced cells or other similar issues.

    Used is the important word here. A battery must be used to maintain its health. A battery must also not be abused to maintain its health.

    Now, none of that touches on what you said, but it’s important background for this to make sense: the BMS will report whatever values it deems safe as its charging and discharging limits, based on factors like internal resistance and temperature. As a result, 20-80% on an unmanaged battery is close to 0-100% on a managed one in new condition, because the BMS will cut power before unsafe discharge limits are reached, and will stop charging once the safe upper limit is hit to prevent overcharge (rough sketch below).
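    To make that mapping concrete, here’s a toy sketch of how a BMS-style cutoff turns raw cell voltage into the percentage you see. The voltage thresholds and the straight-line interpolation are simplifying assumptions on my part, not anyone’s real firmware:

    ```python
    # Toy model: map raw cell voltage onto the 0-100% a BMS reports,
    # cutting off before the cell's true limits. Numbers are hypothetical.
    SAFE_MIN_V = 3.0   # reported as 0%   -- above the ~2.5 V deep-discharge zone
    SAFE_MAX_V = 4.1   # reported as 100% -- below the ~4.2 V absolute ceiling

    def reported_percent(cell_voltage: float) -> float:
        """Linear approximation; a real BMS also folds in temperature,
        internal resistance, and current draw."""
        pct = (cell_voltage - SAFE_MIN_V) / (SAFE_MAX_V - SAFE_MIN_V) * 100.0
        return max(0.0, min(100.0, pct))  # clamp: outside this window the BMS cuts power

    print(reported_percent(3.0))   # 0.0   -> BMS disconnects the load
    print(reported_percent(4.1))   # 100.0 -> BMS stops charging
    ```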





  • When/if you do, an RTX 3070 LHR (about $300 new) is just about the BARE MINIMUM for GPU inference. It’s what I use, and it gets the job done, but I often find the context limits too small to be usable with larger models.

    If you wanna go team red, Vulkan should still work for inference, and you’d have access to options with significantly more VRAM, letting you use larger models more effectively. I’m not sure about speed though; I haven’t personally used AMD’s GPUs since around 2015. (There’s a rough sketch of what loading a local model looks like below.)
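    For anyone wondering what the setup looks like in practice, here’s a rough sketch using llama-cpp-python, one common way to self-host on a GPU like this. The model filename and the parameter values are placeholders, not recommendations:

    ```python
    # Rough sketch with llama-cpp-python; model file and numbers are examples.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen2.5-coder-1.5b-instruct-q4_k_m.gguf",  # hypothetical local file
        n_ctx=8192,       # context window -- on an 8 GB card this is what you run out of
        n_gpu_layers=-1,  # offload every layer to the GPU
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What does this compiler error mean? ..."}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])
    ```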



  • If you’re planning on using LLMs for coding advice, may I recommend self-hosting a model and adding the documentation and repositories as context?

    I use a 1.5B Qwen model (mega dumb), but with no context limit I can attach the documentation for the language I’m using and the files from the repo I’m working in (always a local repo in my case). I can usually explain what I’m doing, what I’m trying to accomplish, and what I’ve tried, and the LLM will generate snippets that at the very least point me in the right direction, and more often than not solve the problem (after minor tweaks, because the dumb model is not so good at coding). The context-stuffing part is sketched below.
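    The “attach docs and repo” part is nothing fancy; it boils down to concatenating files into the prompt ahead of your question. A minimal sketch, with made-up paths and a *.py filter as stand-ins for whatever your repo actually contains:

    ```python
    # Minimal sketch: stuff documentation and repo files into one prompt.
    # Paths and the *.py filter are example assumptions, not a real project.
    from pathlib import Path

    def build_context(doc_paths: list[str], repo_dir: str) -> str:
        parts = []
        for p in doc_paths:
            parts.append(f"--- documentation: {p} ---\n{Path(p).read_text()}")
        for f in sorted(Path(repo_dir).rglob("*.py")):
            parts.append(f"--- repo file: {f} ---\n{f.read_text()}")
        return "\n\n".join(parts)

    prompt = (
        build_context(["docs/lang_reference.md"], "my_local_repo")
        + "\n\nWhat I'm doing, what I'm trying to accomplish, what I've tried: ..."
    )
    # hand `prompt` to whatever local model you're running
    ```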