
Mark Derbecker

Seeq Team
  • Posts: 74
  • Joined
  • Last visited
  • Days Won: 17

Everything posted by Mark Derbecker

  1. Hi Nate, There are currently no storage retention limitations on such data. Seeq stores this data in its main data storage area (along with workbook definitions, calculation definitions, etc.). Seeq is not meant to be a "system of record" for pushed data -- we recommend that you always retain the ability to recreate this data from its original sources if necessary. But to be clear, Seeq currently does not pare down this data using any sort of retention policy. Mark
  2. Frequently Asked Questions regarding the Log4Shell vulnerability and Seeq
     Does Seeq or any of its constituent 3rd-party components use Log4j?
     No. Seeq uses a library called Logback, which is not vulnerable to the high-criticality Log4Shell exploit (CVE-2021-44228). There is a related vulnerability (CVE-2021-42550) in Logback that the security community has deemed medium criticality because it requires the attacker to have access to Logback's configuration file, a sign of an already compromised system. Per Seeq's 3rd-party vulnerability assessment process, CVE-2021-42550 has been assessed as low criticality in the context of the Seeq system, but the library will nonetheless be upgraded as part of an upcoming Seeq point release.
     My virus scanner is flagging log4j-over-slf4j-1.7.7.jar as vulnerable. What's up with that?
     Virus scanners, depending on their level of sophistication, may flag a file named cassandra/files/lib/log4j-over-slf4j-1.7.7.jar in the Seeq installation folder for Seeq versions R50 and older. This component does not suffer from the Log4Shell vulnerability; it is an adapter layer that is only used if Log4j is also used somewhere in the system. See http://slf4j.org/log4shell.html for more information. The component was part of the Cassandra NoSQL database system, a sub-component of the overall Seeq system in versions R50 and earlier. Cassandra has been removed in R51 and later.
     I see something called log4js and log4javascript in the webserver folder of the Seeq installation. What about that?
     Those files are related to a JavaScript library that performs a similar logging function to Log4j but does not contain the vulnerability. More information: https://github.com/log4js-node/log4js-node/issues/1105
  3. You're doing the right thing, Ivan. SPy should have a better avenue for handling enums. I've logged a feature request, and internally we're tracking it as CRAB-23208.
  4. Thorsten, I tracked this down to a non-obvious issue with your initial push, in which you specified `Value Unit of Measure` instead of `Value Unit Of Measure`. (Notice the case difference on the word "Of".) Unfortunately, due to another issue with Seeq Server, once you push with the wrong case the signal becomes "tainted" and you can't make any changes to it via a normal spy.push(). As you found, you can only archive it with the SDK. I'm going to add appropriate guards and error messages to spy.push() that prevent inadvertent use of the wrong case, and I'll also log a bug report in our system to fix Seeq Server. (See the first sketch after this list for a correctly cased push.)
  5. Ahhh, yes, I see. That won't work because it's rejecting the overall POST. I'll think about how we might fix that.
  6. With respect to archiving, I think if you change `Archived` to `True` in the DataFrame and then `spy.push()` that item, it should mark it as Archived (see the second sketch after this list). Have you tried that? Note that the `archive` argument of `spy.push()` is a little misleading -- it is used for another purpose. I'll think about whether to rename it for clarity.
  7. Hi Thorsten, In an upcoming version of Seeq Server (R22.0.46.xx) it will be possible for SPy to verify units prior to pushing. (We needed to fix a problem in the REST API that was preventing SPy from querying Seeq Server for the list of supported units.) I'll circle back to this thread to let you know when SPy has the capability, but keep in mind that you will need to upgrade Seeq Server as well. Mark
  8. I see that mistake, Thorsten; we'll fix it. Thanks for reporting it.
  9. Thorsten, SPy module version 136 has a fix for the timezone issue; you can upgrade if you'd like to try it out.
  10. Hi Thorsten, I think something's amiss with how we handle timezones when start and end are omitted. `result1` ends up with a timezone offset that is one hour different from `result2`'s, so its query window extends into the future, where there is no data. I'll investigate. (In the meantime, the third sketch after this list shows a possible workaround.)
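
Regarding the casing issue in post 4: a minimal sketch of a push that uses the correctly cased `Value Unit Of Measure` property. The signal name, sample data, and unit here are hypothetical, not taken from the original thread.

```python
import pandas as pd
from seeq import spy

# Hypothetical sample data for the signal being pushed
data = pd.DataFrame(
    {'My Pushed Signal': [1.0, 2.0, 3.0]},
    index=pd.date_range('2020-01-01', periods=3, freq='h', tz='UTC'))

# Metadata rows are indexed by the data column name. Note the capital "O":
# 'Value Unit Of Measure', not 'Value Unit of Measure'.
metadata = pd.DataFrame(
    [{'Value Unit Of Measure': 'm/s'}],
    index=['My Pushed Signal'])

spy.push(data=data, metadata=metadata)
```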
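For the archiving approach described in post 6, a sketch that assumes the item was pushed earlier and can be found by name (the name is hypothetical): look it up, set `Archived` to `True`, and push the metadata back.

```python
from seeq import spy

# Find the previously pushed item (hypothetical name), flag it as archived,
# and push the metadata back so the change takes effect.
items = spy.search({'Name': 'My Pushed Signal'})
items['Archived'] = True
spy.push(metadata=items)
```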
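For the timezone behavior discussed in post 10, one possible interim workaround (not a confirmed fix) is to pass explicit, timezone-aware start and end timestamps to `spy.pull()` so the query window does not depend on the default behavior. The item name, dates, and timezone below are hypothetical.

```python
import pandas as pd
from seeq import spy

# Explicit, timezone-aware start/end avoid relying on the default window,
# which is where the one-hour offset discrepancy shows up.
items = spy.search({'Name': 'My Pushed Signal'})
result = spy.pull(items,
                  start=pd.Timestamp('2020-01-01 00:00', tz='Europe/Berlin'),
                  end=pd.Timestamp('2020-01-02 00:00', tz='Europe/Berlin'))
```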