Add Powersensor integration #158505
Conversation
It seems you haven't yet signed a CLA. Please do so here.
Once you do that we will be able to review and accept this pull request.
Thanks!
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍
Hey there @home-assistant/matter, mind taking a look at this pull request as it has been labeled with an integration (…)
daylanKifky left a comment
Hello @bookman-dius, thanks for your contribution! Unfortunately, your PR is way too long and difficult to review. Moreover, it lacks a description of the proposed change, making it even harder to understand the need for it.
Additionally, there are inconsistent uses of type hinting throughout the code. Home Assistant aims for strict type checking, so please ensure all function parameters and return types have proper type hints.
Please remove superfluous code and reduce the PR to just the required lines to make it work. Please take a look at the PR recommendations.
Force-pushed 1db115a to bec0c9a
Force-pushed a7dd416 to b2b51ab
homeassistant/components/powersensor/sensor/PowersensorSensorEntity.py — outdated review thread (resolved)
Force-pushed ae48b33 to 79e11d9
homeassistant/components/powersensor/powersensor_message_dispatcher.py — outdated review thread (resolved)
Force-pushed 79e11d9 to f440ca9
Force-pushed f440ca9 to a6248d2
```python
async def delayed_set_unique_id(self, *args, **kwargs):
    nonlocal call_count
    call_count += 1
    if call_count == 1:
        await asyncio.sleep(1.0)
    return await original_set_unique_id(self, *args, **kwargs)
```
This race-condition test relies on real-time asyncio.sleep(1.0) / sleep(0.99) to force ordering. That can be slow and timing-sensitive in CI. Prefer controlling scheduling deterministically (for example by using an asyncio.Event barrier or patching async_set_unique_id to yield control without wall-clock waits).
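A deterministic alternative can be sketched with plain `asyncio`. The `delayed_set_unique_id`/`original_set_unique_id` names mirror the quoted test; the Event wiring is an assumed refactor, not the PR's code:

```python
import asyncio

async def main():
    order = []
    first_call_started = asyncio.Event()
    release_first_call = asyncio.Event()
    call_count = 0

    async def original_set_unique_id(name):
        order.append(name)

    async def delayed_set_unique_id(name):
        # Instead of asyncio.sleep(1.0), park the first call on an Event so
        # the test controls the interleaving explicitly and instantly.
        nonlocal call_count
        call_count += 1
        if call_count == 1:
            first_call_started.set()
            await release_first_call.wait()
        return await original_set_unique_id(name)

    first = asyncio.create_task(delayed_set_unique_id("first"))
    await first_call_started.wait()        # first call is now parked
    await delayed_set_unique_id("second")  # second call completes immediately
    release_first_call.set()               # un-park the first call
    await first
    return order

print(asyncio.run(main()))  # ['second', 'first']
```

The ordering is now enforced by the events rather than by racing wall-clock timers, so the test cannot flake under CI load.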
```python
def _default_device_name(self) -> str:
    return f"Powersensor Plug (ID: {self._mac})"
```
Plug entity uses hard-coded _attr_name/device name strings (and includes the MAC in the default name) without has_entity_name + translation keys. Consider using translatable entity descriptions/translation keys and keep the device identifier in DeviceInfo/unique_id rather than the displayed name.
Suggested change:

```diff
-    return f"Powersensor Plug (ID: {self._mac})"
+    """Return the default device name without embedding the MAC address."""
+    return "Powersensor Plug"
```
```python
_LOGGER.debug("Relayed sensor %s with role %s found", mac, role)

if mac not in self.sensors:
    _LOGGER.debug("Reporting new sensor %s with role %s", mac, role)
    self.on_start_sensor_queue[mac] = role
    async_dispatcher_send(self._hass, CREATE_SENSOR_SIGNAL, mac, role)

# We only apply a known persisted role, so we don't clobber a sensor's
# actual knowledge.
if persisted_role is not None and role != persisted_role:
    _LOGGER.debug(
        "Restoring role for %s from %s to %s", mac, role, persisted_role
    )
    async_dispatcher_send(self._hass, ROLE_UPDATE_SIGNAL, mac, persisted_role)
```
role can be None here (via _filter_unknown), but downstream handlers/entities annotate role as str. This will likely fail static type checking and makes the API ambiguous. Consider typing roles as str | None throughout (dispatcher signals, entity constructors, and role update handlers) and handle None explicitly.
Suggested change:

```diff
-_LOGGER.debug("Relayed sensor %s with role %s found", mac, role)
-if mac not in self.sensors:
-    _LOGGER.debug("Reporting new sensor %s with role %s", mac, role)
-    self.on_start_sensor_queue[mac] = role
-    async_dispatcher_send(self._hass, CREATE_SENSOR_SIGNAL, mac, role)
-# We only apply a known persisted role, so we don't clobber a sensor's
-# actual knowledge.
-if persisted_role is not None and role != persisted_role:
-    _LOGGER.debug(
-        "Restoring role for %s from %s to %s", mac, role, persisted_role
-    )
-    async_dispatcher_send(self._hass, ROLE_UPDATE_SIGNAL, mac, persisted_role)
+# Ensure role is always a string when creating sensors and sending signals
+normalized_role = role or UNKNOWN
+_LOGGER.debug("Relayed sensor %s with role %s found", mac, normalized_role)
+if mac not in self.sensors:
+    _LOGGER.debug(
+        "Reporting new sensor %s with role %s", mac, normalized_role
+    )
+    self.on_start_sensor_queue[mac] = normalized_role
+    async_dispatcher_send(
+        self._hass,
+        CREATE_SENSOR_SIGNAL,
+        mac,
+        normalized_role,
+    )
+# We only apply a known persisted role, so we don't clobber a sensor's
+# actual knowledge.
+if persisted_role is not None and normalized_role != persisted_role:
+    _LOGGER.debug(
+        "Restoring role for %s from %s to %s",
+        mac,
+        normalized_role,
+        persisted_role,
+    )
+    async_dispatcher_send(
+        self._hass,
+        ROLE_UPDATE_SIGNAL,
+        mac,
+        persisted_role,
+    )
```
```python
await dispatcher._schedule_plug_removal(
    network_info["name"], zeroconf_discovery_info
)
await asyncio.sleep(dispatcher._debounce_seconds + 1)
for _ in range(3):
    await dispatcher._hass.async_block_till_done()
```
These tests use real asyncio.sleep(...) delays (based on debounce timeouts), which will slow down the suite and can be flaky under CI load. Prefer advancing time with Home Assistant test helpers (for example async_fire_time_changed) or patching the debounce/sleep mechanism so the test runs without wall-clock waiting.
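Where the HA-specific helpers aren't in play, one generic pattern is to patch `asyncio.sleep` so the debounce resolves without any wall-clock wait. This sketch uses a hypothetical stand-in for the dispatcher's debounce coroutine:

```python
import asyncio
from unittest.mock import patch

async def debounce_then_remove(removed: list, name: str, debounce_seconds: float = 60):
    # Stand-in for the dispatcher's debounced-removal logic.
    await asyncio.sleep(debounce_seconds)
    removed.append(name)

async def run_test():
    removed: list[str] = []
    real_sleep = asyncio.sleep  # bind the real sleep before patching

    async def instant_sleep(delay, *args, **kwargs):
        # Yield control once, but never wait on the wall clock.
        await real_sleep(0)

    with patch("asyncio.sleep", instant_sleep):
        await debounce_then_remove(removed, "plug-1")
    return removed

print(asyncio.run(run_test()))  # ['plug-1'], with no 60 s wait
```

The same idea applies to patching the debounce duration itself to 0; either way the suite stops depending on real time.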
```python
service.remove_service(mock_zc, zc_info.type, zc_info.name)

mock_zc.get_service_info.assert_called_once_with(zc_info.type, zc_info.name)
mock_send.assert_not_called()
await asyncio.sleep(service._debounce_seconds + 1)

for _ in range(3):
    await hass.async_block_till_done()
```
This test waits using real asyncio.sleep(service._debounce_seconds + 1), which adds several seconds of wall-clock time. Consider using HA time-travel helpers or patching the debounce delay so the test completes quickly without relying on real time.
```python
def __init__(
    self, hass: HomeAssistant, service_type: str = "_powersensor._tcp.local."
) -> None:
    """Constructor for zeroconf service that handles the discovery of plugs."""
    self._hass = hass
    self.service_type = service_type
```
PowersensorDiscoveryService defaults service_type to "_powersensor._tcp.local.", but the integration registers _powersensor._udp.local. in manifest.json. Consider aligning the default to UDP (or dropping the default entirely) to avoid accidental mismatches in tests or future refactors.
```python
if user_input is not None:
    _LOGGER.debug(
        "Creating entry with discovered plugs: %s",
        self.hass.data[DOMAIN]["discovered_plugs"],
    )
    return self.async_create_entry(
```
This debug log call passes a dict as the message (_LOGGER.debug(self.hass.data[DOMAIN]["discovered_plugs"])) which bypasses lazy formatting and produces an unstructured message. Use a descriptive format string with %s so the message is clear and avoids unnecessary stringification when debug logging is disabled.
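For illustration, the difference between the two call styles with the stdlib logger (logger name and data are hypothetical):

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG disabled
logger = logging.getLogger("powersensor")

discovered_plugs = {"aa:bb:cc": {"host": "10.0.0.5"}}

# Bad: the dict becomes the whole message with no context, and it is
# stringified eagerly regardless of the log level.
logger.debug(discovered_plugs)

# Good: lazy %s formatting; the dict is only stringified if a handler
# actually emits the record at DEBUG level.
logger.debug("Creating entry with discovered plugs: %s", discovered_plugs)
```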
```python
api.connect()
```
_create_api calls api.connect() synchronously inside the event loop. If PlugApi.connect() performs any blocking network/socket work, it will block Home Assistant's event loop and degrade responsiveness. Prefer an async API in powersensor-local or offload the connect call to the executor (hass.async_add_executor_job) and make _create_api async so callers can await it safely.
Suggested change:

```diff
-api.connect()
+# Offload the potentially blocking connect call to the executor to
+# avoid blocking Home Assistant's event loop.
+loop = asyncio.get_running_loop()
+await loop.run_in_executor(None, api.connect)
```
homeassistant/components/powersensor/powersensor_message_dispatcher.py — outdated review thread (resolved)
```python
async def handle_role_update(mac_address: str, new_role: str):
    """Persists role updates and signals for VHH update if needed."""
    new_data = copy.deepcopy({**entry.data})
    if CFG_ROLES not in new_data:
        new_data[CFG_ROLES] = {}
    roles = new_data[CFG_ROLES]
    old_role = roles.get(mac_address, None)
    if old_role is None or old_role != new_role:
        _LOGGER.debug(
            "Updating role for %s from %s to %s",
            mac_address,
            old_role,
            new_role,
        )
        roles[mac_address] = new_role
        hass.config_entries.async_update_entry(entry, data=new_data)
```
handle_role_update (and downstream entity role handling) treats roles as always str, but the config flow can send None for an unknown role. With the current annotations and logic, this is a type-checking issue and also risks storing None into entry.data[CFG_ROLES] without the rest of the code explicitly supporting it. Consider using str | None for roles across the dispatcher/sensor platform/entity code and normalize the persisted value (e.g., omit the key or store a dedicated sentinel string) to keep entry.data consistent.
```python
while not self._stop_task and self._plug_added_queue:
    queue_snapshot = self._plug_added_queue.copy()
    for mac_address, host, port, name in queue_snapshot:
        # @todo: maybe better to query the entity registry?
        if not self._plug_has_been_seen(mac_address, name):
            async_dispatcher_send(
                self._hass,
                CREATE_PLUG_SIGNAL,
                mac_address,
                host,
                port,
                name,
            )
        elif (
            mac_address in self._known_plugs
            and mac_address not in self.plugs
        ):
            _LOGGER.info(
                "Plug with mac %s is known, but API is missing."
                " Reconnecting without requesting entity creation...",
                mac_address,
            )
            self._create_api(mac_address, host, port, name)
        else:
            _LOGGER.debug(
                "Plug: %s has already been created as an entity in Home Assistant."
                " Skipping and flushing from queue.",
                mac_address,
            )
            self._plug_added_queue.remove(
                (mac_address, host, port, name)
            )

    await asyncio.sleep(5)
```
In _monitor_plug_queue, when a plug has not yet been "seen", CREATE_PLUG_SIGNAL is sent but the queue item is kept. If the PLUG_ADDED_TO_HA_SIGNAL acknowledgement is delayed, this loop will resend CREATE_PLUG_SIGNAL every 5 seconds and can trigger duplicate entity creation attempts for the same MAC. Mark a plug as pending/seen before dispatching (or remove it from the queue and re-add on failure) to ensure entity creation is requested at most once per plug.
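One shape for the at-most-once dispatch the comment asks for, as a self-contained sketch. `PlugQueue`, `mark_added`, and the callback are hypothetical stand-ins for the dispatcher, the `PLUG_ADDED_TO_HA_SIGNAL` acknowledgement, and `async_dispatcher_send`:

```python
import asyncio

class PlugQueue:
    """Sketch: request entity creation at most once per MAC (assumed refactor)."""

    def __init__(self, send_create_signal):
        self._queue: list[tuple[str, str, int, str]] = []
        self._pending: set[str] = set()
        self._send = send_create_signal

    def enqueue(self, mac, host, port, name):
        self._queue.append((mac, host, port, name))

    async def process(self):
        for mac, host, port, name in self._queue.copy():
            if mac not in self._pending:
                # Mark pending *before* dispatching, so a retry pass
                # cannot re-send the create signal for the same MAC.
                self._pending.add(mac)
                self._send(mac, host, port, name)
            # Item stays queued until acknowledged via mark_added().

    def mark_added(self, mac):
        """Handle the acknowledgement signal: flush the plug from the queue."""
        self._pending.discard(mac)
        self._queue = [item for item in self._queue if item[0] != mac]

sent = []
q = PlugQueue(lambda mac, host, port, name: sent.append(mac))
q.enqueue("aa:bb", "10.0.0.5", 49476, "plug")
asyncio.run(q.process())
asyncio.run(q.process())  # retry pass: no duplicate signal
q.mark_added("aa:bb")     # ACK received: flush from queue
# sent == ["aa:bb"] and the queue is now empty
```

If creation can fail, the failure handler would remove the MAC from `_pending` so a later pass retries.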
```python
def __init__(self, hass: HomeAssistant, debounce_timeout: float = 60) -> None:
    """Initialize the listener, set up various buffers to hold info."""
    self._hass = hass
    self._plugs: dict[str, dict] = {}
    self._discoveries: dict[str, AsyncServiceInfo] = {}
    self._pending_removals: dict[str, asyncio.Task] = {}
    self._debounce_seconds = debounce_timeout

def add_service(self, zc, type_, name):
    """Handle zeroconf messages for adding new devices."""
    self.cancel_any_pending_removal(name, "request to add")
    info = self.__add_plug(zc, type_, name)
    if info:
        asyncio.run_coroutine_threadsafe(
            self._async_service_add(self._plugs[name]), self._hass.loop
        )

async def _async_service_add(self, *args):
    """Send add signal to dispatcher."""
    self.dispatch(ZEROCONF_ADD_PLUG_SIGNAL, *args)

async def _async_delayed_remove(self, name):
    """Actually process the removal after delay."""
    try:
        await asyncio.sleep(self._debounce_seconds)
        _LOGGER.info(
            "Request to remove service %s still pending after timeout."
            " Processing remove request...",
            name,
        )
        if name in self._plugs:
            data = self._plugs[name].copy()
            del self._plugs[name]
        else:
            data = None
        asyncio.run_coroutine_threadsafe(
            self._async_service_remove(name, data), self._hass.loop
        )
    except asyncio.CancelledError:
        # Task was cancelled because service came back
        _LOGGER.info(
            "Request to remove service %s was canceled by request to update"
            " or add plug.",
            name,
        )
        raise
    finally:
        # Either way we're done with this task
        self._pending_removals.pop(name, None)

def remove_service(self, zc, type_, name):
    """Handle zeroconf messages for removal of devices."""
    if name in self._pending_removals:
        # removal for this service is already pending
        return

    _LOGGER.info("Scheduling removal for %s", name)
    self._pending_removals[name] = asyncio.run_coroutine_threadsafe(
        self._async_delayed_remove(name), self._hass.loop
    )
```
PowersensorServiceListener._pending_removals is annotated as dict[str, asyncio.Task], but asyncio.run_coroutine_threadsafe(...) returns a concurrent.futures.Future, not an asyncio.Task. This will fail type checking and can hide differences in cancellation/exception handling. Update the type to concurrent.futures.Future[None] (or store the created task differently) and ensure any exceptions from the scheduled coroutine are handled/logged.
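The return-type point is easy to check in isolation:

```python
import asyncio
import concurrent.futures
import threading

# Run an event loop in a background thread, like HA's loop from the
# perspective of the zeroconf callback thread.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

async def coro():
    return "done"

# run_coroutine_threadsafe returns a concurrent.futures.Future,
# not an asyncio.Task, so the dict annotation in the quoted code is wrong.
fut = asyncio.run_coroutine_threadsafe(coro(), loop)
assert isinstance(fut, concurrent.futures.Future)
assert not isinstance(fut, asyncio.Task)
print(fut.result(timeout=5))  # done

loop.call_soon_threadsafe(loop.stop)
thread.join(timeout=5)
```

Note also that `concurrent.futures.Future.cancel()` cannot interrupt a coroutine that has already started running, unlike `asyncio.Task.cancel()`, which is why the comment flags the cancellation semantics as well as the annotation.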
```python
should_poll = False
_attr_has_entity_name = True
_attr_available = True

_ENTITY_CONFIGS = {
    HouseholdMeasurements.POWER_HOME_USE: PowersensorVirtualHouseholdSensorEntityDescription(
        key="Power - Home use",
        device_class=SensorDeviceClass.POWER,
        native_unit_of_measurement=UnitOfPower.WATT,
        state_class=SensorStateClass.MEASUREMENT,
        suggested_display_precision=0,
        formatter=fmt_int,
        event="home_usage",
    ),
    HouseholdMeasurements.POWER_FROM_GRID: PowersensorVirtualHouseholdSensorEntityDescription(
        key="Power - From grid",
        device_class=SensorDeviceClass.POWER,
        native_unit_of_measurement=UnitOfPower.WATT,
        state_class=SensorStateClass.MEASUREMENT,
        suggested_display_precision=0,
        formatter=fmt_int,
        event="from_grid",
    ),
    HouseholdMeasurements.POWER_TO_GRID: PowersensorVirtualHouseholdSensorEntityDescription(
        key="Power - To grid",
        device_class=SensorDeviceClass.POWER,
        native_unit_of_measurement=UnitOfPower.WATT,
        state_class=SensorStateClass.MEASUREMENT,
        suggested_display_precision=0,
        formatter=fmt_int,
        event="to_grid",
    ),
    HouseholdMeasurements.POWER_SOLAR_GENERATION: PowersensorVirtualHouseholdSensorEntityDescription(
        key="Power - Solar generation",
        device_class=SensorDeviceClass.POWER,
        native_unit_of_measurement=UnitOfPower.WATT,
        state_class=SensorStateClass.MEASUREMENT,
        suggested_display_precision=0,
        formatter=fmt_int,
        event="solar_generation",
    ),
    HouseholdMeasurements.ENERGY_HOME_USE: PowersensorVirtualHouseholdSensorEntityDescription(
        key="Energy - Home usage",
        device_class=SensorDeviceClass.ENERGY,
        native_unit_of_measurement=UnitOfEnergy.KILO_WATT_HOUR,
        state_class=SensorStateClass.TOTAL_INCREASING,
        suggested_display_precision=3,
        formatter=fmt_ws_to_kwh,
        event="home_usage_summation",
    ),
    HouseholdMeasurements.ENERGY_FROM_GRID: PowersensorVirtualHouseholdSensorEntityDescription(
        key="Energy - From grid",
        device_class=SensorDeviceClass.ENERGY,
        state_class=SensorStateClass.TOTAL_INCREASING,
        native_unit_of_measurement=UnitOfEnergy.KILO_WATT_HOUR,
        formatter=fmt_ws_to_kwh,
        suggested_display_precision=3,
        event="from_grid_summation",
    ),
    HouseholdMeasurements.ENERGY_TO_GRID: PowersensorVirtualHouseholdSensorEntityDescription(
        key="Energy - To grid",
        device_class=SensorDeviceClass.ENERGY,
        state_class=SensorStateClass.TOTAL_INCREASING,
        native_unit_of_measurement=UnitOfEnergy.KILO_WATT_HOUR,
        formatter=fmt_ws_to_kwh,
        suggested_display_precision=3,
        event="to_grid_summation",
    ),
    HouseholdMeasurements.ENERGY_SOLAR_GENERATION: PowersensorVirtualHouseholdSensorEntityDescription(
        key="Energy - Solar generation",
        device_class=SensorDeviceClass.ENERGY,
        state_class=SensorStateClass.TOTAL_INCREASING,
        native_unit_of_measurement=UnitOfEnergy.KILO_WATT_HOUR,
        formatter=fmt_ws_to_kwh,
        suggested_display_precision=3,
        event="solar_generation_summation",
    ),
}

def __init__(
    self, vhh: VirtualHousehold, measurement_type: HouseholdMeasurements
) -> None:
    """Initialize the entity."""
    self._vhh = vhh
    self._attr_should_poll = False
    self._config = self._ENTITY_CONFIGS[measurement_type]

    self._attr_name = self._config.key
    self._attr_unique_id = f"vhh_{self._config.event}"

    self.entity_description = self._config
```
|
PowersensorHouseholdEntity sets _attr_has_entity_name = True but also sets _attr_name to a fully-qualified, user-facing string (self._config.key). With has_entity_name, the entity name should come from the entity description (and ideally be translatable) while the device name stays stable. Consider moving these labels to translation_key/strings.json and letting Home Assistant compose the display name instead of hard-coding it.
```python
"identifiers": {(DOMAIN, "vhh")},
"manufacturer": "Powersensor",
"model": "Virtual",
"name": "Powersensor Household View 🏠",
```
The Virtual Household device name includes an emoji ("Powersensor Household View 🏠"). Device names are user-facing and should be translatable/plain text to avoid inconsistent rendering. Consider using a plain device name and rely on translations (and entity icons) instead of emoji in names.
Suggested change:

```diff
-"name": "Powersensor Household View 🏠",
+"name": "Powersensor Household View",
```
```python
self.entity_description = config

self._attr_unique_id = f"{mac}_{measurement_type.name}"
self._attr_device_info = self.device_info
```
Potential AttributeError when accessing device_info property. Line 66 accesses self.device_info, which is an abstract property that raises NotImplementedError (line 75). This will cause an error during initialization of the base class. The assignment should be deferred or removed, as subclasses will override the property.
Suggested change:

```diff
-self._attr_device_info = self.device_info
```
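The failure mode (and why it doesn't bite when every concrete subclass overrides the property) can be reproduced with plain Python; class names here are hypothetical:

```python
class Base:
    def __init__(self):
        # Accessing the "abstract" property during base-class __init__
        # dispatches dynamically: it raises on Base itself, but resolves
        # to the subclass override when one exists.
        self._device_info = self.device_info

    @property
    def device_info(self):
        raise NotImplementedError

class Child(Base):
    @property
    def device_info(self):
        return {"name": "plug"}

try:
    Base()
except NotImplementedError:
    print("Base() raises, as the review comment warns")

assert Child()._device_info == {"name": "plug"}  # the override is used
```

So the code only works by accident of every subclass overriding the property; deferring or removing the assignment, as suggested, makes the contract explicit.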
```python
# check that the right number of entities have been added
assert len(entities) == 5
# @todo: check that the correct entities are created
```
TODO comment left in production code. Line 100 contains a TODO comment about checking that the correct entities are created. This check should be implemented before merging to ensure test completeness, or the TODO should be removed if the check is not needed.
Suggested change:

```diff
-# @todo: check that the correct entities are created
```
```python
# @todo: delete this block here as well after investigation complete
if dispatcher._monitor_add_plug_queue is not None:
    dispatcher._monitor_add_plug_queue.cancel()
    try:
        await dispatcher._monitor_add_plug_queue
    except asyncio.CancelledError:
        pass
    finally:
        dispatcher._monitor_add_plug_queue = None


@pytest.mark.asyncio
async def test_dispatcher_monitor_plug_queue(
    monkeypatch: pytest.MonkeyPatch, monkey_patched_dispatcher, network_info
) -> None:
    """Test the `enqueue_plug_for_adding` and `process_plug_queue` methods of the dispatcher.

    This test verifies that:
    - The correct API object is created when adding a plug.
    - The queue is properly cleared after adding a plug.
    """
    dispatcher = monkey_patched_dispatcher

    # mac address known, but not in plugs
    dispatcher._known_plugs.add(MAC)

    assert not dispatcher.plugs
    await dispatcher.enqueue_plug_for_adding(network_info)
    await dispatcher.process_plug_queue()
    for _ in range(3):
        await dispatcher._hass.async_block_till_done()

    # an api object should have been created
    assert MAC in dispatcher.plugs
    # Think this is a sign that the finally block is not running as expected.
    # @todo: investigate dispatcher plug queue watching task cleanup
```
TODO comments left in test code. Lines 136 and 171 contain TODO comments about task cleanup investigation. These should either be addressed or removed before merging to ensure the code is production-ready.
```python
def __init__(self) -> None:
    """Initialize the config flow."""
```
Empty __init__ method that provides no functionality. The __init__ method on lines 37-38 is empty and serves no purpose. It can be removed entirely as Python will use the default constructor from the parent class.
Suggested change:

```diff
-def __init__(self) -> None:
-    """Initialize the config flow."""
```
```python
self._timeout = timedelta(seconds=timeout_seconds)  # Adjust as needed

self.measurement_type: MeasurementType = measurement_type
self.entity_description = input_config[measurement_type]
```
Redundant assignment of entity_description. Lines 61 and 63 both assign to self.entity_description, with the second assignment making the first one unnecessary. Remove line 61 to eliminate the redundancy.
Suggested change:

```diff
-self.entity_description = input_config[measurement_type]
```
```python
while not self._stop_task and self._plug_added_queue:
    queue_snapshot = self._plug_added_queue.copy()
    for mac_address, host, port, name in queue_snapshot:
        # @todo: maybe better to query the entity registry?
```
TODO comment left in production code. Line 140 contains a TODO comment about querying the entity registry. This check should be implemented or the TODO should be removed before merging to production.
Suggested change:

```diff
-# @todo: maybe better to query the entity registry?
```
```python
            "port": 49476,
        }
    },
    "with_solar": False,
```
Unused field "with_solar" in test config data. Line 61 defines a "with_solar" field, but this field is not part of the expected config entry data structure according to the comment in __init__.py (lines 31-45) which only mentions "devices" and "roles" at the top level. This field appears to be vestigial and should be removed.
Suggested change:

```diff
-"with_solar": False,
```
```python
self._hass = hass
self._plugs: dict[str, dict] = {}
self._discoveries: dict[str, AsyncServiceInfo] = {}
self._pending_removals: dict[str, concurrent.futures.Future] = {}
```
Missing import for concurrent.futures. Line 32 uses concurrent.futures.Future in the type hint, but the module is not imported. This will cause a NameError at runtime when the type hint is evaluated.
```python
await service._async_get_service_info(mock_zc, zc_info.type, zc_info.name)

assert zc_info.name in service._discoveries
assert service._discoveries[zc_info.name] == zc_info

await service._async_get_service_info(mock_zc, zc_info.type, "garbage_name")
assert "garbage_name" not in service._discoveries
```
Test calls nonexistent method _async_get_service_info. The test on lines 347 and 352 calls service._async_get_service_info(), but this method doesn't exist in the PowersensorServiceListener class. The actual implementation uses synchronous get_service_info from the zeroconf library. This test will fail.
joostlek left a comment
Can you maybe elaborate on how the device is receiving data? There is a lot of zeroconf song and dance, but for me it's not really clear why we need it, also as 1 entry = 1 device.
Also, can we merge the sensor library into a single sensor.py?
This file should not be committed
I will fix that, and I can merge the sensor library into a single sensor.py. It had been that way before; I'm not entirely sure why I decided it was better as a library, but it's easy enough to merge it back. I'll push those changes up once I fix whatever else is wrong in the CI pipeline here.
Regarding the zeroconf song and dance, I'd definitely like suggestions for an easier way. The basic setup is that users have plugs and sensors. The plugs are connected to WiFi and serve as gateways. The sensors establish a connection to one plug (and only one plug at a time) and relay messages through that plug. Most configurations have more than one plug, and users sometimes move the plugs around their home, so plugs drop in and out of connection periodically, or people add more plugs to their kit. The sensors then reshuffle which plug they connect to. The zeroconf games were mostly to safely detect the appearance of new plugs and to differentiate genuine removal of a plug from a temporary interruption of communication. While there are multiple plugs, they all work together, and given that sensors can hop from one plug to another, it seemed like they all belonged within the same integration; having one instance per plug seemed likely to create headaches with uniqueness of the sensor entities and be more cumbersome for users.
@joostlek To add a few more details around the topology. As far as we've been able to tell, the HA abstractions do not map cleanly to the device architecture of the Powersensor solution. We have two kinds of physical devices:
- plugs, which are always-on devices providing both energy data and the local API
- sensors, which are mostly-sleeping devices, providing energy data only
Any given installation will comprise one or more plugs, which are present on the network, and one or more sensors, which are not. The sensors, due to being battery operated, only communicate using low-power radio to a single plug within range, which then relays the data for the sensor and makes it available on the local WiFi network. A typical installation will have a sensor monitoring the whole-of-house energy use, and another sensor monitoring solar generation (if solar is present), and maybe one or more for hardwired appliances. As sensors only do net metering, and HA requires split import/export, the relevant sensor readings are fed into the "virtual household" abstraction to produce individual readings.
So, we have a situation where sensors are not directly discoverable, nor interactable with. Plugs act as hubs for them. But the set of plugs is collectively the hub for the sensors, not individually, as sensors may roam between them at will according to radio conditions. Additionally, the virtual household device needs to sit on top of all of those devices in order to see the necessary data. The only sensible mapping to HA abstractions that we could see was to use a single config entry, designate it as a hub integration, and do, as you say, the "zeroconf song and dance".
And because a diagram is worth a thousand words or something.
A minimal Powersensor topology looks like:

```text
[virtual household]
        |
      [plug]
        |
     [sensor]
```

But with more devices, it could be something like:

```text
         [virtual household]
        /      |      |     \
   [plug]   [plug]  [plug]  [plug]
               |       |  \
           [sensor] [sensor] [sensor]
```

only to an hour later look like:

```text
         [virtual household]
        /      |      |     \
   [plug]   [plug]  [plug]  [plug]
     /  |  \
[sensor] [sensor] [sensor]
```

and if we overlay the role information, it'd look like:

```text
              [ virtual household ]
             /       |        |        \
 [appliance] [appliance] [appliance] [appliance]
      /           |           |
[solar-net]  [appliance]  [house-net]
 ^^^^^^^                   ^^^^^^^
 required by               required by
 virtual household         virtual household
```
Force-pushed 16a3710 to c118e5f
Breaking change
Proposed change
Type of change
Additional information
Checklist
- The code has been formatted using Ruff (`ruff format homeassistant tests`)

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:

- Updated and included derived files by running: `python3 -m script.hassfest`.
- `requirements_all.txt` updated by running `python3 -m script.gen_requirements_all`.

To help with the load of incoming pull requests: