Authenticating and connecting to a SSL enabled Scylla cluster using Spark 2

This quick article is a wrap-up for reference on how to connect to ScyllaDB using Spark 2 when authentication and SSL are enforced for the clients on the Scylla cluster.

We encountered multiple problems, even more since we distribute our workload using a YARN cluster, so our worker nodes need everything required to connect properly to Scylla. We found very little help online, so I hope this will serve anyone facing similar issues (that's also why I copy/pasted the errors here).

The authentication part is easygoing by itself and was not the source of our problems; SSL on the client side was.

Environment

- (py)spark: 2.1.0.cloudera2
- spark-cassandra-connector: datastax:spark-cassandra-connector:2.0.1-s_2.11
- python: 3.5.5
- java: 1.8.0_144
- scylladb: 2.1.5

SSL cipher setup

The DataStax Spark Cassandra connector uses the TLS_RSA_WITH_AES_256_CBC_SHA cipher by default, which the JVM does not support out of the box. This raises the following error when connecting to Scylla:

    18/07/18 13:13:41 WARN channel.ChannelInitializer: Failed to initialize a channel. Closing: [id: 0x8d6f78a7]
    java.lang.IllegalArgumentException: Cannot support TLS_RSA_WITH_AES_256_CBC_SHA with currently installed providers

According to the SSL documentation we have two ciphers available:

- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_128_CBC_SHA

We can get rid of the error by lowering the cipher to TLS_RSA_WITH_AES_128_CBC_SHA using the following configuration:

    .config("spark.cassandra.connection.ssl.enabledAlgorithms", "TLS_RSA_WITH_AES_128_CBC_SHA")\

However, this is not really a good solution; instead we'd be inclined to use the TLS_RSA_WITH_AES_256_CBC_SHA version. For this we need to follow DataStax's procedure.
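As a minimal sketch (plain Python; the property names come from the spark-cassandra-connector documentation, the values are illustrative), keeping the SSL options in a dict makes it easy to switch between the two ciphers and apply them to a builder in a loop:

```python
# Sketch: hold the connector's SSL options in a plain dict. Property names
# are from the spark-cassandra-connector documentation; values illustrative.
ssl_options = {
    "spark.cassandra.connection.ssl.enabled": "true",
    # Default cipher -- requires the JCE unlimited strength policy jars:
    "spark.cassandra.connection.ssl.enabledAlgorithms": "TLS_RSA_WITH_AES_256_CBC_SHA",
}

# Without the JCE jars installed, fall back to the 128-bit cipher instead:
fallback_options = {
    **ssl_options,
    "spark.cassandra.connection.ssl.enabledAlgorithms": "TLS_RSA_WITH_AES_128_CBC_SHA",
}

# Applying either set to a SparkSession builder would then look like:
# for key, value in ssl_options.items():
#     builder = builder.config(key, value)
```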
Then we need to deploy the JCE security jars on all our client nodes; if you use YARN like us, this means deploying these jars to all your NodeManager nodes. For example, by hand:

    # unzip jce_policy-8.zip
    # cp UnlimitedJCEPolicyJDK8/*.jar /opt/oracle-jdk-bin-1.8.0.144/jre/lib/security/

Java trust store

When connecting, the clients need to be able to validate the Scylla cluster's self-signed CA. This is done by setting up a truststore JKS file and providing it to the Spark connector configuration (note that you should protect this file with a password).

Keystore vs truststore

In an SSL handshake, the purpose of the truststore is to verify credentials, while the purpose of the keystore is to provide credentials. A keystore in Java stores private keys and the certificates corresponding to their public keys, and is required if you are an SSL server or if SSL requires client authentication. A truststore stores certificates from third parties or your own self-signed certificates; your application identifies and validates them using this truststore.

The spark-cassandra-connector documentation has two options to handle keystore and truststore.
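For intuition, Python's own ssl module draws the same line: the CA bundle of an SSLContext plays the role of the Java truststore. A rough analogy (the certificate path below is hypothetical):

```python
import ssl

# Rough analogy: the CA bundle of an SSLContext plays the role of the Java
# truststore -- it holds the certificates the client trusts when validating
# the server's certificate chain.
ctx = ssl.create_default_context()
# ctx.load_verify_locations(cafile="my_self_signed_ca.crt")  # hypothetical path

# A default client context requires and verifies the server certificate;
# this is the very check that fails ("unable to find valid certification
# path") when the self-signed CA is missing from the truststore.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```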
When we did not use the truststore option, we would get an obscure error when connecting to Scylla:

    com.datastax.driver.core.exceptions.TransportException: [node/1.1.1.1:9042] Channel has been closed

When enabling debug logging, we got a clearer error which indicated a failure in validating the SSL certificate provided by the Scylla server node:

    Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Setting up the truststore JKS

You need to have the self-signed CA public certificate file, then issue the following command:

    # keytool -importcert -file /usr/local/share/ca-certificates/my_self_signed_ca.crt -keystore company_truststore.jks -noprompt
    Enter keystore password:
    Re-enter new password:
    Certificate was added to keystore

Using the truststore

Now you need to configure Spark to use the truststore like this:

    .config("spark.cassandra.connection.ssl.trustStore.password", "password")\
    .config("spark.cassandra.connection.ssl.trustStore.path", "company_truststore.jks")\

Spark SSL configuration example

This wraps up the SSL connection configuration used for Spark.
This example uses pyspark2 and reads a table in Scylla from a YARN cluster:

    $ pyspark2 --packages datastax:spark-cassandra-connector:2.0.1-s_2.11 --files company_truststore.jks
    >>> spark = SparkSession.builder.appName("scylla_app")\
    .config("spark.cassandra.auth.password", "test")\
    .config("spark.cassandra.auth.username", "test")\
    .config("spark.cassandra.connection.host", "node1,node2,node3")\
    .config("spark.cassandra.connection.ssl.clientAuth.enabled", True)\
    .config("spark.cassandra.connection.ssl.enabled", True)\
    .config("spark.cassandra.connection.ssl.trustStore.password", "password")\
    .config("spark.cassandra.connection.ssl.trustStore.path", "company_truststore.jks")\
    .config("spark.cassandra.input.split.size_in_mb", 1)\
    .config("spark.yarn.queue", "scylla_queue").getOrCreate()
    >>> df = spark.read.format("org.apache.spark.sql.cassandra").options(table="my_table", keyspace="test").load()
    >>> df.show()

This entry was posted in Linux and tagged gentoo, scylla on 2018/07/19.

A botspot story

I felt like sharing a recent story that allowed us to identify a bot in a haystack thanks to Scylla.

The scenario

While working on loading 2B+ rows into Scylla from Hive (using Spark), we noticed a strange behaviour in the performance of one of our nodes. We started wondering why that server (in blue on our graphs) was having those peaks of load and was clearly diverging from the two others. As we obviously expect the three nodes to behave the same, there were two options on the table:

- hardware problem on the node
- bad data distribution (bad schema design? consistent hash problem?)

We shared this with our pals from ScyllaDB and started working on finding out what was going on.

The investigation

Hardware?

A hardware problem was pretty quickly ruled out: nothing showed up in the monitoring or the kernel logs, and I/O queues and throughput were good.

Data distribution?
Avi Kivity (ScyllaDB's CTO) quickly got the feeling that something was wrong with the data distribution and that we could be facing a hotspot situation. He nailed it down to shard 44 thanks to the scylla-grafana-monitoring platform.

Data is distributed between shards that are stored on nodes (consistent hash ring). This distribution is done by hashing the primary key of your data, which dictates the shard it belongs to (and thus the node(s) where the shard is stored). If one of your keys is over-represented in your original data set, then the shard it belongs to can be overly populated and the related node overloaded. This is called a hotspot situation.

Tracing queries

The first step was to trace queries in Scylla to try to get deeper into the hotspot analysis. So we enabled tracing, using the following formula to get about one trace per second into the system_traces namespace:

    tracing probability = 1 / expected requests per second throughput

In our case, we were doing between 90k req/s and 150k req/s, so we settled on 100k req/s to be safe and enabled tracing on our nodes like this:

    # nodetool settraceprobability 0.00001

It turns out tracing didn't help very much in our case because the traces do not include the query parameters in Scylla 2.1; this is becoming available in the soon-to-be-released 2.2 version.

Note: traces expire on the tables, so make sure you truncate the events and sessions tables while iterating. Otherwise you will have to wait for the next gc_grace_period (10 days by default) before they are actually removed. If you do not do that and generate millions of traces like we did, querying the mentioned tables will likely time out because of the "tombstoned" rows, even if there are no traces inside any more.

Looking at cfhistograms

Glauber Costa was also helping on the case and got us looking at the cfhistograms of the tables we were pushing data to.
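The hotspot mechanism described above can be sketched in a few lines of Python; md5 here is just a toy stand-in for the partitioner's hash, and the shard count is arbitrary:

```python
import hashlib
from collections import Counter

def shard_of(key: str, num_shards: int = 64) -> int:
    """Toy stand-in for the partitioner: hash the partition key and map it
    onto a fixed number of shards (the real ring uses tokens, not modulo)."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest, "big") % num_shards

# A well-distributed key space spreads roughly evenly over the shards...
balanced = Counter(shard_of(f"user-{i}") for i in range(100_000))

# ...but a single over-represented key (our "bot") always hashes to the
# same shard, overloading the node that owns it -- the hotspot we observed.
skewed = Counter(shard_of("bot-user") for _ in range(100_000))

print(len(balanced), len(skewed))  # many shards used vs. exactly one
```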