IngeniousRocks (They/She)

Don’t DM me without permission please

  • 0 Posts
  • 115 Comments
Joined 11 months ago
Cake day: December 7th, 2024


  • This is correct with unmanaged batteries. Batteries with a BMS, however, will never get below whatever voltage is set as their 0% unless they're allowed to sit at 0% for long enough that e n t r o p y does its thing and the charge slowly dissipates over time. This will happen even to a fully charged battery left to its own devices (ba dum tss) for too long.

    The point of the BMS is to manage the health of potentially dangerous lithium batteries. As long as they are used within spec, it should keep voltages from getting so low that the batteries enter a state of deep discharge, and it should also prevent overcharging due to imbalanced charging rates or other similar issues.

    Used is the important word here. A battery must be used to maintain its health. A battery must also not be abused to maintain its health.

    Now, none of that touches on what you said, but it's important background for this to make sense: the BMS reports whatever values it deems safe charging and discharging limits, based on factors like internal resistance and temperature. As a result, 20-80% of an unmanaged battery is close to 0-100% of a managed one in new condition, because the BMS will cut power before an unsafe discharge limit is reached and will stop charging to prevent overcharge once the upper limit is reached.
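    Roughly what I mean, as a toy sketch. None of the numbers here come from a real pack; the voltage cutoffs and the temperature/resistance tweaks are made-up placeholders, and a real BMS maps voltage to state of charge nonlinearly rather than with this crude linear scale:

    ```typescript
    // Toy sketch of how a BMS maps raw cell voltage onto the 0-100% it reports.
    // All thresholds are illustrative placeholders, not values from a real pack.

    interface SafetyWindow {
      cutoffLowV: number;  // BMS cuts discharge here and calls it 0%
      cutoffHighV: number; // BMS stops charging here and calls it 100%
    }

    // The BMS narrows the window it reports based on temperature and
    // internal resistance; this is a crude stand-in for that logic.
    function safetyWindow(tempC: number, internalResistanceOhm: number): SafetyWindow {
      let low = 3.0;   // volts, placeholder "0%" cutoff
      let high = 4.2;  // volts, placeholder "100%" cutoff
      if (tempC < 0 || tempC > 45) {
        // Outside the comfortable range, be more conservative on both ends.
        low += 0.1;
        high -= 0.1;
      }
      if (internalResistanceOhm > 0.1) {
        // An aging, high-resistance cell sags under load, so leave more margin.
        low += 0.05;
      }
      return { cutoffLowV: low, cutoffHighV: high };
    }

    // The percentage shown to the user is relative to the managed window,
    // not to the cell's absolute chemical limits.
    function reportedPercent(cellVoltage: number, w: SafetyWindow): number {
      const span = w.cutoffHighV - w.cutoffLowV;
      const frac = (cellVoltage - w.cutoffLowV) / span;
      return Math.min(100, Math.max(0, Math.round(frac * 100)));
    }

    const w = safetyWindow(25, 0.05);
    console.log(reportedPercent(3.0, w)); // 0   -- BMS would cut power around here
    console.log(reportedPercent(4.2, w)); // 100 -- BMS would stop charging here
    ```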





  • When/if you do, an RTX 3070 LHR (about $300 new) is just about the BARE MINIMUM for GPU inferencing. It's what I use and it gets the job done, but I often find the context limits too small to be usable with larger models.

    If you wanna go team red, Vulkan should still work for inferencing, and you have access to options with significantly more VRAM, letting you use larger models more effectively. I'm not sure about speed, though; I haven't personally used AMD's GPUs since around 2015.



  • If you’re planning on using LLMs for coding advice, may I recommend self-hosting a model and adding the documentation and repositories as context?

    I use a 1.5B Qwen model (mega dumb), but with no context limit I can attach the documentation for the language I’m using plus the files from the repo I’m working in (always a local repo in my case). I can usually explain what I’m doing, what I’m trying to accomplish, and what I’ve tried, and the LLM will generate snippets that at the very least point me in the right direction, and more often than not solve the problem (after minor tweaks, because dumb model not so good at coding).
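    A rough sketch of that workflow, assuming an OpenAI-compatible local server (llama.cpp's server and Ollama both expose one). The endpoint URL, model tag, and file paths are placeholders for whatever you actually run:

    ```typescript
    // Rough sketch: stuff local docs and repo files into the prompt for a
    // self-hosted model. Assumes an OpenAI-compatible endpoint like the one
    // llama.cpp's server or Ollama exposes; URL, model name, and file paths
    // are placeholders.
    import { readFileSync } from "node:fs";

    const ENDPOINT = "http://localhost:11434/v1/chat/completions"; // e.g. Ollama
    const MODEL = "qwen2.5-coder:1.5b"; // placeholder model tag

    async function askWithContext(question: string, contextFiles: string[]) {
      // Concatenate the documentation and repo files into one context blob.
      const context = contextFiles
        .map((path) => `--- ${path} ---\n${readFileSync(path, "utf8")}`)
        .join("\n\n");

      const res = await fetch(ENDPOINT, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: MODEL,
          messages: [
            { role: "system", content: "Answer using only the attached files." },
            { role: "user", content: `${context}\n\n${question}` },
          ],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

    // Usage: explain what you're doing and what you've tried, and let the
    // model point you in a direction you can then clean up yourself.
    askWithContext(
      "Why does my parser choke on nested lists? See parser.ts.",
      ["docs/language-reference.md", "src/parser.ts"],
    ).then(console.log);
    ```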







  • With modern lithium-ion batteries it’s because, as their capacity decreases over time, the BMS can’t always keep up and relearn where 100% is unless you occasionally drain the pack all the way. This can result in someone charging their battery to, say, 97% and leaving it plugged in for hours chasing a 100% it will never reach. That’s potentially unsafe, since it heats up the battery (there’s a toy sketch of what I mean below).

    Edit: Autocorrupt beansed up my comment
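    Toy model of the idea (all numbers made up): the gauge counts charge in and out against a stored "full capacity" figure, that figure drifts out of date as the pack ages, and it only gets corrected when the pack is actually run down to empty:

    ```typescript
    // Toy model of why an occasional full discharge helps the fuel gauge:
    // it counts charge in and out against a stored "full capacity" figure,
    // and that figure only gets corrected when the pack actually hits empty.
    // All numbers here are made up for illustration.

    class FuelGauge {
      fullCapacitymAh = 4000;  // what the gauge *believes* full is
      remainingmAh = 4000;

      discharge(mAh: number) {
        this.remainingmAh = Math.max(0, this.remainingmAh - mAh);
      }

      charge(mAh: number) {
        this.remainingmAh = Math.min(this.fullCapacitymAh, this.remainingmAh + mAh);
      }

      // When the cell voltage says "actually empty", the gauge can relearn
      // how much charge the aged pack really holds.
      relearnAtEmpty(trueCapacitymAh: number) {
        this.fullCapacitymAh = trueCapacitymAh;
        this.remainingmAh = 0;
      }

      percent(): number {
        return Math.round((this.remainingmAh / this.fullCapacitymAh) * 100);
      }
    }

    const gauge = new FuelGauge();
    // The pack has aged and really only holds ~3500 mAh now, but the gauge
    // still thinks 4000 mAh is full, so "100%" becomes a target it never hits.
    gauge.discharge(3500);          // pack is actually empty
    console.log(gauge.percent());   // gauge still shows ~13% left
    gauge.relearnAtEmpty(3500);     // full discharge lets it recalibrate
    gauge.charge(3500);
    console.log(gauge.percent());   // 100% again, now against the real capacity
    ```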



  • If you don’t mind buying used, I had great luck with my Samsung L2020w until the cat got some cellophane stuck in it and it died (for good).

    I replaced it with a Brother HL-series because that’s what I used at work.

    The lil guys are beasts.

    I’d normally not wanna go with the “big corpo option,” but for a printer without HP shenanigans it’s really great.

    If you set it up with the apps, it will try to get you to sign up for an ink subscription; this is not required.




  • So there’s some data it’s concatenating from some lists and running some checks on. If one of the checks succeeds it runs a function called clearInfo; I don’t know JS, so I don’t know if that’s a built-in or if they defined it elsewhere in the script. If that check fails it runs clearInterval. This is all wrapped in a call to setInterval, and it looks like, if all the checks succeed and the interpreter isn’t moved to a different section of code by those functions called earlier, it will set whatever this interval is to 10000, presumably milliseconds. That’s the big block in the middle.

    The top block calls some code referencing a document, which appears to be stored as a list; index 12 is referenced and another obfuscated argument is applied to it.

    The bottom block appears to define a function, showInfo, that interacts with the document referenced above, presumably concatenating and formatting some data into a pretty output so a user can get info about that document.

    Someone who actually knows JS could probably tell you more; I know Python and C++, and this looks kinda like Python, but the syntax is a bit different, so I could be way off.
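    For what it’s worth, the general shape of the pattern described above looks something like this. This is not a reconstruction of the actual obfuscated snippet; setInterval and clearInterval are standard JavaScript built-ins, while runChecks, clearInfo, and showInfo are stand-ins for whatever the real script defines:

    ```typescript
    // General shape of the described pattern, not the real obfuscated code.
    // setInterval/clearInterval are standard built-ins; the rest are stand-ins.

    function runChecks(): boolean {
      // Placeholder for the concatenation/validation the real script does.
      return Math.random() > 0.1;
    }

    function clearInfo(): void {
      // Placeholder: the real script presumably resets some displayed state.
      console.log("checks passed, clearing info");
    }

    function showInfo(data: string): void {
      // Placeholder for the bottom block: format some data for the user.
      console.log(`info: ${data}`);
    }

    // setInterval re-runs the callback every 10000 ms (10 s) until something
    // calls clearInterval with the handle it returned.
    const handle = setInterval(() => {
      if (runChecks()) {
        clearInfo();           // a check succeeded: reset/refresh the info
      } else {
        clearInterval(handle); // a check failed: stop the polling loop
      }
    }, 10000);

    showInfo("example document field"); // e.g. something pulled off the document
    ```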