  • I’m guessing that the fridge makes assumptions similar to those automobiles make for their lamps. Some cars designed when incandescent bulbs were the only option use the bulb’s characteristic resistance as an integral part of the circuit. For example, turn signals will often blink faster when the front or rear bulb on one side is not working, and this happens to be useful as an indicator to the motorist that a bulb has gone bust.

    For other lamps, such as the interior lamp, the car might do a “soft start” where, upon opening the car door, the lamp ramps up slowly to full brightness. If an LED bulb is installed here, the issues are manifold: some LEDs don’t support dimming, whereas all incandescent bulbs do. And the circuit may require the exact resistance of an incandescent bulb to control the rate of ramping up to full brightness. An LED bulb here may malfunction or even damage the car’s circuitry.

    Automobile light bulbs are almost always supplied with 12 volts, so an aftermarket LED replacement bulb is designed to expect 12 volts as well, then internally convert down to the native voltage of the LEDs. However, in the non-trivial circuits described above, the voltage to the bulb is intentionally varied. The converter in the LED bulb still tries to produce the native LED voltage, and so draws more current to compensate. This constant-power behavior does not follow Ohm’s Law, whereas an incandescent bulb behaves approximately like a resistor.

    So my guess is that your fridge could possibly be expecting certain resistance values from the bulb, but the LED you installed is not meeting those assumptions. This could be harmless, or maybe either the fridge or the LED bulb has been damaged. The best way to test would be to install a new, like-for-like OEM incandescent bulb and see if it works in your fridge.
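    To illustrate the difference, here’s a small sketch comparing a resistive bulb with a constant-power LED driver as the supply voltage drops. The wattages and resistance are illustrative round numbers I’ve assumed, not from any datasheet:

```python
# Compare current draw of a resistive incandescent bulb vs. a
# constant-power LED bulb as the supply voltage is reduced.
# All values are illustrative assumptions, not from a datasheet.

BULB_RESISTANCE = 14.4   # ohms: a 10 W bulb at 12 V (R = V^2 / P)
LED_POWER = 3.0          # watts: constant power drawn by the LED driver

for volts in (12.0, 9.0, 6.0):
    i_bulb = volts / BULB_RESISTANCE   # Ohm's law: current falls with voltage
    i_led = LED_POWER / volts          # constant power: current rises as voltage falls
    print(f"{volts:4.1f} V  bulb: {i_bulb:.2f} A   LED: {i_led:.2f} A")
```

    At 6 V the LED bulb draws twice the current it does at 12 V, which is the opposite of what a dimming circuit designed around a resistive load expects.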



  • To start, the idea of charging in parallel while discharging in series is indeed valid. And for multicell battery packs such as those in electric automobiles and ebikes, it’s the only practical approach. That said, implementations vary, with some solutions providing the bulk of charging current through the series connection and then using per-cell leads to balance each cell.

    In your case, you would have a substantial number of cells in series, to the point that series charging would require high voltage DC, beyond the normal 50-60 VDC that constitutes low-voltage.

    But depending on whether charging and discharging are mutually exclusive operations, one option would be to electrically break the pack into smaller groups, so that existing charge controllers can charge each group through normal means (i.e. balancing wires). Supposing that you used 12s charger ICs, that would reduce the number of ICs to about 9 for a pack with a nominal series voltage of ~400 VDC. You would have to make sure these ICs are isolated once the groups are reconstituted into the full series arrangement.
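    As a rough sketch of that arithmetic, assuming 3.6 V nominal li-ion cells (adjust for your actual chemistry):

```python
# Rough sizing: 9 groups of 12 cells, one "12s" charger IC per group.
# Assumes 3.6 V nominal / 4.2 V full li-ion cells; adjust for your chemistry.

NOMINAL_CELL_V = 3.6
FULL_CELL_V = 4.2
CELLS_PER_GROUP = 12
GROUPS = 9

cells_in_series = GROUPS * CELLS_PER_GROUP           # 108 cells
pack_nominal_v = cells_in_series * NOMINAL_CELL_V    # ~389 V nominal
pack_full_v = cells_in_series * FULL_CELL_V          # ~454 V at full charge
print(cells_in_series, pack_nominal_v, pack_full_v)
```

    Note that the fully charged voltage is noticeably higher than the nominal figure, which matters for insulation and component ratings.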

    Alternatively, you could float all the charging ICs, by having 9 rails of DC voltage to supply each of the charging ICs. And this would allow continuous charging and battery monitoring during discharge. Even with the associated circuitry to provide these floating rails, the part count is still lower than having each cell managed by individual chargers and MOSFETs.

    It’s not clear from your post what capacity or current you intend for this overall pack, but even in small packs, I cannot possibly advise using anything but a proper li-ion charge controller for managing battery cells. The idea of charging a capacitor to 4.2 V and then blindly dumping its charge into a cell is fraught with issues, such as the lack of cell temperature monitoring or even just charging the cell in a healthy manner. Charge ICs are designed specifically for this task, and are just plain easier to build into a pack while being safer.


  • I can accept the premise that LLMs are being used to write Commons speeches – MPs are also people, I’m told – but these graphs suggest that LLMs are overusing certain stock phrases which have existed in the business world and apparently in Commons speeches since at least 2007.

    What puzzles me is why LLMs are more prone to using these particular phrases. Does this happen for all users of LLMs, or only when British MPs in particular are requesting a speech?

    I’d be interested to know whether the same trend for the same phrases can be found in the Canadian House of Commons: although it follows much of the same procedure, North American English should skew the frequencies of certain words. So if the same trend can be found, that suggests the common LLMs do lean towards certain phrases. But if the trend is not statistically significant in Canada, then perhaps British MPs issue different prompts than their Canadian counterparts.

    What I’m saying is that I rise today to highlight additional avenues of intrigue, as MPs and citizens alike are navigating a world where AI supposedly streamlines daily activities. That certain trends may or may not exist underscores the gravity of this seemingly bustling industry that we call AI.

    [just to be clear, that last paragraph is entirely in jest]


  • If only one side of the switch/points remains, then depending on the type of crossing and the condition of the wheels, there’s a chance that the trolley’s right-side wheels can jump over the switch and continue straight ahead, even with the switch set to diverge onto the non-existent siding.

    Or it could derail but continue barreling forward anyway. But trolleys don’t tend to be going that fast.


  • I did indeed have a chuckle, but also, this shouldn’t be too foreign compared to other, more-popular languages. The construction func param1 param2 can be found in POSIX shell, with Bash scripts regularly using that construction to pass arguments around. And although wrapping that call in parentheses would create a subshell, it should still work, and thus you could have a Lisp-like invocation in your sh script. Although if you want one of those parameters to be evaluated, then you’re forced to use the $() construction, which adds the dollar symbol.
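    A quick illustration of that parallel, using a made-up function name in any POSIX shell:

```shell
#!/bin/sh
# A Lisp-flavored call shape in POSIX shell: function name first,
# then space-separated arguments -- no commas required.
add() {
    echo $(( $1 + $2 ))
}

add 1 2            # plain invocation, like (add 1 2) without the parens
(add 3 4)          # wrapped in parens: runs in a subshell, but still works
result=$(add 5 6)  # evaluating the result needs the dollar symbol
echo "$result"
```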

    As for Lisp code that often looks like symbol soup, like (= 0 retcode), the equal sign is just the name of the numerical equality function, which takes two numbers. The idea of using “=” as a function name should not be abnormal for C++ programmers, because operator overloading allows doing exactly that.

    So although it does look kinda wonky to anyone who hasn’t seen Lisp in school, sufficient exposure to popular codebases and languages should impart an intuition for how Lisp code is written. And one doesn’t even need to use an RPN calculator, although that also aids understanding of Lisp.

    Addendum: perhaps in a century, contemporary programmers will find it bizarre that C used the equal sign to mean assignment rather than equality, when an arrow like <- would more accurately describe assignment, while also avoiding the common error of mixing up = and == in an if-conditional. What looks normal today will not necessarily be so obvious in hindsight.



  • But could this comparison not be done with some hysteresis?

    It can, but analog design is also not my forte.

    The part count is not important as long as the parts aren’t terribly expensive, since this is exclusively for my personal use.

    In that case, the original suggestion of using an ADC and an op-amp would be the most flexible for software. You would, however, need to do some research on wiring an op-amp to amplify the sense voltage to something your microcontroller’s ADC is capable of resolving.
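    As a rough feel for the numbers involved (all values here are assumptions for illustration, not from any specific part): with a small shunt, the sense voltage is tiny and must be amplified to use most of the ADC’s input range.

```python
# Sizing the op-amp gain so the shunt's sense voltage spans the ADC range.
# All values are illustrative assumptions, not from any specific part.

R_SHUNT = 0.05        # ohms: current-sense shunt
I_MAX = 2.0           # amps: maximum expected motor current
ADC_FULL_SCALE = 3.3  # volts: microcontroller ADC reference

v_sense_max = I_MAX * R_SHUNT        # 0.1 V at full current
gain = ADC_FULL_SCALE / v_sense_max  # gain needed: 33x
# A non-inverting amplifier has gain = 1 + Rf/Rg, so e.g. Rf = 32k, Rg = 1k
print(f"sense {v_sense_max} V -> gain {gain:.0f}x")
```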



  • Ah, I entirely missed the sense pin when skimming the datasheet.

    That said, using a shunt for an inductive load like a motor may have to contend with the corresponding spikes caused when switching the motor. This just means the thing doing the sensing needs to tolerate the spikes. Or mitigate them, with either a snubber or a flyback diode (is this actually doable with an H bridge?).

    As for the op-amp and ADC: if we already accept the addition of the op-amp, it is also feasible to instead use a comparator with a reference voltage set to correspond to the maximum safe current. The digital output of the comparator can then be fed directly to the microcontroller as an interrupt, providing fast reaction without the sampling delay of an ADC. But this would be so quick that the spikes from earlier could get picked up, unless mitigated. It also means software will not know the exact current level, only that it’s higher than the threshold set by the reference voltage.

    Still, these solutions are adding to the part count. If that’s a concern, then I’d look for a motor driver with this functionality built in.
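    For the comparator route, the reference voltage is just the amplified sense voltage at the maximum safe current. A sketch with assumed values (match these to your actual shunt and amplifier):

```python
# Picking the comparator reference voltage for an overcurrent trip.
# Illustrative values only; not from any specific design.

R_SHUNT = 0.05  # ohms: current-sense shunt
GAIN = 33.0     # op-amp gain ahead of the comparator
I_TRIP = 1.5    # amps: maximum safe motor current

v_ref = I_TRIP * R_SHUNT * GAIN  # comparator threshold, about 2.48 V
print(f"set the comparator reference to about {v_ref:.2f} V")
```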


  • In that case, I would suggest looking at a different motor driver. The driver you’ve specified doesn’t seem to have any provisions to detect a motor stall, which is something that other drivers can potentially do. Ideally, the driver would detect the back EMF from the stall and inform the microcontroller, which would then decide to stop movement.

    An external current sensor might work, but that’s adding to the part count and might not be as capable as built-in functionality within the motor driver. Plus, fancier motor drivers have some neat features that you could take advantage of as well. I think it would be more prudent to consider a different driver before adding additional parts.


  • I don’t think there’s a good way to adapt this circuit to provide current limiting on the 18 V rail. Supposing that it were possible, what behavior do you want when the current limit is reached? Should the motor reduce its output torque at the limit? Should the 18 V rail shut down completely? Should the microcontroller be notified of the current limit so that software can deal with it? Would a simple fuse be sufficient?

    All of these are possible options, with various tradeoffs. But depending on your application, I would think the easiest design is to build sufficient capacity into the 18 V rail so that the motor and 5 V converter inherently never draw more current than can be provided.
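    A back-of-envelope budget for sizing the rail that way, with every number here being an assumption you’d replace with your own worst-case figures:

```python
# Worst-case current budget for the 18 V rail; all numbers are assumptions.

MOTOR_STALL_A = 2.5                        # worst case: motor stall current
CONVERTER_IN_A = 5.0 * 1.0 / 18.0 / 0.85   # 5 V @ 1 A through an ~85% efficient buck
MARGIN = 1.25                              # 25% headroom

supply_min_a = (MOTOR_STALL_A + CONVERTER_IN_A) * MARGIN
print(f"size the 18 V supply for at least {supply_min_a:.2f} A")
```

    The key point is to budget for the motor’s stall current, not its running current, since a stall is exactly when the rail will be stressed.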



  • I suppose the first question is whether you had the baud rate set correctly. The photo of the “cleaned up signals” (not entirely sure what you did, compared to the prior photo) seems to show a baud rate of 38400, given that each bit seems to take about 25 microseconds.

    As for the voltage levels, the same photo seems to show 5 V TTL. So it doesn’t seem like you would need a level converter from RS-232 line levels (up to ±15 V). This is one of the few times where the distinction between a “serial port” and an RS-232 port makes a difference, but a lot of data center switches will use 5 V TTL, because the signals don’t have to travel more than maybe 5 meters.
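    The bit-time arithmetic behind that guess, as a quick sketch:

```python
# Infer a standard baud rate from the bit time read off a scope capture.
BIT_TIME_US = 25.0  # microseconds per bit, roughly as measured
STANDARD_BAUDS = (9600, 19200, 38400, 57600, 115200)

measured_baud = 1e6 / BIT_TIME_US  # 40,000 baud from the rough 25 us reading
closest = min(STANDARD_BAUDS, key=lambda b: abs(b - measured_baud))
print(closest)  # 38400: one bit at that rate is really ~26.04 us
```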


  • Methodology:

    To determine the bath, shower and bidet hotspots around the world, we calculated the percentage of hotel bookings in each country, state and city that have showers, baths or bidets.

    We used Booking.com to determine the total number of accommodations (hotels, apartments, holiday rentals, etc.) in each geography and then found the number of accommodations in each geography that have either baths, showers or bidets using Booking.com filters.

    I was unable to find an option for “baths, showers, or bidet” in the booking.com filters, let alone options for each of those individually. So I’m not sure about the exact data used for this infographic.




  • I’ve changed the setting to prevent the behavior, but the prompt is still missing.

    You’ve disabled the automatic switching based on HDMI CEC, and yet the TV still switches automatically, without any notification or option beforehand? This just sounds like a firmware update for the TV introduced a bug.

    I’m in the same camp as the other commenter who suggested never attaching a so-called smart TV to the Internet, for then it can never perform an unwanted update. Whatever neat features an update may bring, it can rarely be reversed if it proves undesirable. I’m staunchly in the “own your hardware” camp, so automatic, non-undoable updates are antithetical to any notion of right-to-repair principles, and will inevitably lead to more disposable, throwaway electronics.

    [gets off soapbox]

    Your best bet might be to attempt a manual software downgrade using a USB stick.



  • Based solely on this drawing – since I don’t have a datasheet for the PWM controller depicted – it looks like the potentiometer is there to provide a DC bias for the input Aux signal. I draw that conclusion from the fact that the potentiometer has its two ends connected to Vref and GND, meaning that turning the wiper selects a voltage somewhere in between those two levels.

    As for how this controls the duty cycle of the PWM, it would depend on the operating theory of the PWM controller. I can’t quite imagine how the controller might produce a PWM output, but I can imagine a PDM output, which tends to be sufficient for approximating coarse audio.

    But the DC bias may also be necessary since the Aux signal might otherwise try to go below GND voltage. The DC bias would raise the Aux signal so that even its lowest valley would remain above GND.

    So I think those are two reasons why the potentiometer cannot be removed: 1) the DC bias is needed for the duty-cycle control, and 2) it keeps the Aux signal from sinking below GND.

    If you did want to replace the potentiometer with something else, you could find a pair of fixed resistors that would still provide the DC bias. I don’t think you could connect the Aux signal directly into the controller.
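    A sketch of that fixed-resistor replacement; the values and Vref here are assumptions, since the right bias point depends on the actual controller:

```python
# Two fixed resistors from Vref to GND reproduce the pot's mid-position bias.
# Values are illustrative; the correct bias depends on the real controller.

V_REF = 5.0        # volts: the controller's reference output (assumed)
R_TOP = 10_000     # ohms: from Vref down to the bias node
R_BOTTOM = 10_000  # ohms: from the bias node down to GND

v_bias = V_REF * R_BOTTOM / (R_TOP + R_BOTTOM)  # 2.5 V: mid-rail bias
print(f"bias node sits at {v_bias:.2f} V")
# The Aux signal couples into this node, so e.g. a 1 V peak swing stays
# between 1.5 V and 3.5 V -- safely above GND.
```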